Merge 5.15.22 into android13-5.15
Changes in 5.15.22
drm/i915: Disable DSB usage for now
selinux: fix double free of cond_list on error paths
audit: improve audit queue handling when "audit=1" on cmdline
ipc/sem: do not sleep with a spin lock held
spi: stm32-qspi: Update spi registering
ASoC: hdmi-codec: Fix OOB memory accesses
ASoC: ops: Reject out of bounds values in snd_soc_put_volsw()
ASoC: ops: Reject out of bounds values in snd_soc_put_volsw_sx()
ASoC: ops: Reject out of bounds values in snd_soc_put_xr_sx()
ALSA: usb-audio: Correct quirk for VF0770
ALSA: hda: Fix UAF of leds class devs at unbinding
ALSA: hda: realtek: Fix race at concurrent COEF updates
ALSA: hda/realtek: Add quirk for ASUS GU603
ALSA: hda/realtek: Add missing fixup-model entry for Gigabyte X570 ALC1220 quirks
ALSA: hda/realtek: Fix silent output on Gigabyte X570S Aorus Master (newer chipset)
ALSA: hda/realtek: Fix silent output on Gigabyte X570 Aorus Xtreme after reboot from Windows
btrfs: don't start transaction for scrub if the fs is mounted read-only
btrfs: fix deadlock between quota disable and qgroup rescan worker
btrfs: fix use-after-free after failure to create a snapshot
Revert "fs/9p: search open fids first"
drm/nouveau: fix off by one in BIOS boundary checking
drm/i915/adlp: Fix TypeC PHY-ready status readout
drm/amd/pm: correct the MGpuFanBoost support for Beige Goby
drm/amd/display: watermark latencies is not enough on DCN31
drm/amd/display: Force link_rate as LINK_RATE_RBR2 for 2018 15" Apple Retina panels
nvme-fabrics: fix state check in nvmf_ctlr_matches_baseopts()
mm/debug_vm_pgtable: remove pte entry from the page table
mm/pgtable: define pte_index so that preprocessor could recognize it
mm/kmemleak: avoid scanning potential huge holes
block: bio-integrity: Advance seed correctly for larger interval sizes
dma-buf: heaps: Fix potential spectre v1 gadget
IB/hfi1: Fix AIP early init panic
Revert "fbcon: Disable accelerated scrolling"
fbcon: Add option to enable legacy hardware acceleration
mptcp: fix msk traversal in mptcp_nl_cmd_set_flags()
Revert "ASoC: mediatek: Check for error clk pointer"
KVM: arm64: Avoid consuming a stale esr value when SError occur
KVM: arm64: Stop handle_exit() from handling HVC twice when an SError occurs
RDMA/cma: Use correct address when leaving multicast group
RDMA/ucma: Protect mc during concurrent multicast leaves
RDMA/siw: Fix refcounting leak in siw_create_qp()
IB/rdmavt: Validate remote_addr during loopback atomic tests
RDMA/siw: Fix broken RDMA Read Fence/Resume logic.
RDMA/mlx4: Don't continue event handler after memory allocation failure
ALSA: usb-audio: initialize variables that could ignore errors
ALSA: hda: Fix signedness of sscanf() arguments
ALSA: hda: Skip codec shutdown in case the codec is not registered
iommu/vt-d: Fix potential memory leak in intel_setup_irq_remapping()
iommu/amd: Fix loop timeout issue in iommu_ga_log_enable()
spi: bcm-qspi: check for valid cs before applying chip select
spi: mediatek: Avoid NULL pointer crash in interrupt
spi: meson-spicc: add IRQ check in meson_spicc_probe
spi: uniphier: fix reference count leak in uniphier_spi_probe()
IB/hfi1: Fix tstats alloc and dealloc
IB/cm: Release previously acquired reference counter in the cm_id_priv
net: ieee802154: hwsim: Ensure proper channel selection at probe time
net: ieee802154: mcr20a: Fix lifs/sifs periods
net: ieee802154: ca8210: Stop leaking skb's
netfilter: nft_reject_bridge: Fix for missing reply from prerouting
net: ieee802154: Return meaningful error codes from the netlink helpers
net/smc: Forward wakeup to smc socket waitqueue after fallback
net: stmmac: dwmac-visconti: No change to ETHER_CLOCK_SEL for unexpected speed request.
net: stmmac: properly handle with runtime pm in stmmac_dvr_remove()
net: macsec: Fix offload support for NETDEV_UNREGISTER event
net: macsec: Verify that send_sci is on when setting Tx sci explicitly
net: stmmac: dump gmac4 DMA registers correctly
net: stmmac: ensure PTP time register reads are consistent
drm/kmb: Fix for build errors with Warray-bounds
drm/i915/overlay: Prevent divide by zero bugs in scaling
drm/amd: avoid suspend on dGPUs w/ s2idle support when runtime PM enabled
ASoC: fsl: Add missing error handling in pcm030_fabric_probe
ASoC: xilinx: xlnx_formatter_pcm: Make buffer bytes multiple of period bytes
ASoC: simple-card: fix probe failure on platform component
ASoC: cpcap: Check for NULL pointer after calling of_get_child_by_name
ASoC: max9759: fix underflow in speaker_gain_control_put()
ASoC: codecs: wcd938x: fix incorrect used of portid
ASoC: codecs: lpass-rx-macro: fix sidetone register offsets
ASoC: codecs: wcd938x: fix return value of mixer put function
pinctrl: sunxi: Fix H616 I2S3 pin data
pinctrl: intel: Fix a glitch when updating IRQ flags on a preconfigured line
pinctrl: intel: fix unexpected interrupt
pinctrl: bcm2835: Fix a few error paths
scsi: bnx2fc: Make bnx2fc_recv_frame() mp safe
nfsd: nfsd4_setclientid_confirm mistakenly expires confirmed client.
gve: fix the wrong AdminQ buffer queue index check
bpf: Use VM_MAP instead of VM_ALLOC for ringbuf
selftests/exec: Remove pipe from TEST_GEN_FILES
selftests: futex: Use variable MAKE instead of make
tools/resolve_btfids: Do not print any commands when building silently
e1000e: Separate ADP board type from TGP
rtc: cmos: Evaluate century appropriate
kvm: add guest_state_{enter,exit}_irqoff()
kvm/arm64: rework guest entry logic
perf: Copy perf_event_attr::sig_data on modification
perf stat: Fix display of grouped aliased events
perf/x86/intel/pt: Fix crash with stop filters in single-range mode
x86/perf: Default set FREEZE_ON_SMI for all
EDAC/altera: Fix deferred probing
EDAC/xgene: Fix deferred probing
ext4: prevent used blocks from being allocated during fast commit replay
ext4: modify the logic of ext4_mb_new_blocks_simple
ext4: fix error handling in ext4_restore_inline_data()
ext4: fix error handling in ext4_fc_record_modified_inode()
ext4: fix incorrect type issue during replay_del_range
net: dsa: mt7530: make NET_DSA_MT7530 select MEDIATEK_GE_PHY
cgroup/cpuset: Fix "suspicious RCU usage" lockdep warning
tools include UAPI: Sync sound/asound.h copy with the kernel sources
gpio: idt3243x: Fix an ignored error return from platform_get_irq()
gpio: mpc8xxx: Fix an ignored error return from platform_get_irq()
selftests: nft_concat_range: add test for reload with no element add/del
selftests: netfilter: check stateless nat udp checksum fixup
Linux 5.15.22
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I9143b858b768a8497c1df9440a74d8c105c32271
@@ -311,27 +311,6 @@ Contact: Daniel Vetter, Noralf Tronnes
 
 Level: Advanced
 
-Garbage collect fbdev scrolling acceleration
---------------------------------------------
-
-Scroll acceleration is disabled in fbcon by hard-wiring p->scrollmode =
-SCROLL_REDRAW. There's a ton of code this will allow us to remove:
-
-- lots of code in fbcon.c
-
-- a bunch of the hooks in fbcon_ops, maybe the remaining hooks could be called
-  directly instead of the function table (with a switch on p->rotate)
-
-- fb_copyarea is unused after this, and can be deleted from all drivers
-
-Note that not all acceleration code can be deleted, since clearing and cursor
-support is still accelerated, which might be good candidates for further
-deletion projects.
-
-Contact: Daniel Vetter
-
-Level: Intermediate
-
 idr_init_base()
 ---------------
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 21
+SUBLEVEL = 22
 EXTRAVERSION =
 NAME = Trick or Treat
 
@@ -936,6 +936,24 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
 		xfer_to_guest_mode_work_pending();
 }
 
+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static int noinstr kvm_arm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	guest_state_enter_irqoff();
+	ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+	guest_state_exit_irqoff();
+
+	return ret;
+}
+
 /**
  * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
  * @vcpu: The VCPU pointer
@@ -1026,9 +1044,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 * Enter the guest
 		 */
 		trace_kvm_entry(*vcpu_pc(vcpu));
-		guest_enter_irqoff();
+		guest_timing_enter_irqoff();
 
-		ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+		ret = kvm_arm_vcpu_enter_exit(vcpu);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		vcpu->stat.exits++;
@@ -1063,26 +1081,23 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		kvm_arch_vcpu_ctxsync_fp(vcpu);
 
 		/*
-		 * We may have taken a host interrupt in HYP mode (ie
-		 * while executing the guest). This interrupt is still
-		 * pending, as we haven't serviced it yet!
+		 * We must ensure that any pending interrupts are taken before
+		 * we exit guest timing so that timer ticks are accounted as
+		 * guest time. Transiently unmask interrupts so that any
+		 * pending interrupts are taken.
 		 *
-		 * We're now back in SVC mode, with interrupts
-		 * disabled. Enabling the interrupts now will have
-		 * the effect of taking the interrupt again, in SVC
-		 * mode this time.
+		 * Per ARM DDI 0487G.b section D1.13.4, an ISB (or other
+		 * context synchronization event) is necessary to ensure that
+		 * pending interrupts are taken.
 		 */
 		local_irq_enable();
+		isb();
+		local_irq_disable();
+
+		guest_timing_exit_irqoff();
+
+		local_irq_enable();
 
-		/*
-		 * We do local_irq_enable() before calling guest_exit() so
-		 * that if a timer interrupt hits while running the guest we
-		 * account that tick as being spent in the guest. We enable
-		 * preemption after calling guest_exit() so that if we get
-		 * preempted we make sure ticks after that is not counted as
-		 * guest time.
-		 */
-		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
 		/* Exit types that need handling before we can be preempted */
@@ -241,6 +241,14 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 {
 	struct kvm_run *run = vcpu->run;
 
+	if (ARM_SERROR_PENDING(exception_index)) {
+		/*
+		 * The SError is handled by handle_exit_early(). If the guest
+		 * survives it will re-execute the original instruction.
+		 */
+		return 1;
+	}
+
 	exception_index = ARM_EXCEPTION_CODE(exception_index);
 
 	switch (exception_index) {
@@ -420,7 +420,8 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
-	if (ARM_SERROR_PENDING(*exit_code)) {
+	if (ARM_SERROR_PENDING(*exit_code) &&
+	    ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
 		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
 
 		/*
@@ -4654,6 +4654,19 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.lbr_read = intel_pmu_lbr_read_64,
 	.lbr_save = intel_pmu_lbr_save,
 	.lbr_restore = intel_pmu_lbr_restore,
+
+	/*
+	 * SMM has access to all 4 rings and while traditionally SMM code only
+	 * ran in CPL0, 2021-era firmware is starting to make use of CPL3 in SMM.
+	 *
+	 * Since the EVENTSEL.{USR,OS} CPL filtering makes no distinction
+	 * between SMM or not, this results in what should be pure userspace
+	 * counters including SMM data.
+	 *
+	 * This is a clear privilege issue, therefore globally disable
+	 * counting SMM by default.
+	 */
+	.attr_freeze_on_smi = 1,
 };
 
 static __init void intel_clovertown_quirk(void)
@@ -897,8 +897,9 @@ static void pt_handle_status(struct pt *pt)
 		 * means we are already losing data; need to let the decoder
 		 * know.
 		 */
-		if (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
-		    buf->output_off == pt_buffer_region_size(buf)) {
+		if (!buf->single &&
+		    (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries) ||
+		     buf->output_off == pt_buffer_region_size(buf))) {
 			perf_aux_output_flag(&pt->handle,
 					     PERF_AUX_FLAG_TRUNCATED);
 			advance++;
@@ -373,7 +373,7 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done)
 	struct blk_integrity *bi = blk_get_integrity(bio->bi_bdev->bd_disk);
 	unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9);
 
-	bip->bip_iter.bi_sector += bytes_done >> 9;
+	bip->bip_iter.bi_sector += bio_integrity_intervals(bi, bytes_done >> 9);
 	bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes);
 }
 
@@ -14,6 +14,7 @@
 #include <linux/xarray.h>
 #include <linux/list.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>
 #include <linux/uaccess.h>
 #include <linux/syscalls.h>
 #include <linux/dma-heap.h>
@@ -172,6 +173,7 @@ static long dma_heap_ioctl(struct file *file, unsigned int ucmd,
 	if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
 		return -EINVAL;
 
+	nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds));
 	/* Get the kernel ioctl cmd that matches */
 	kcmd = dma_heap_ioctl_cmds[nr];
 
@@ -350,7 +350,7 @@ static int altr_sdram_probe(struct platform_device *pdev)
 	if (irq < 0) {
 		edac_printk(KERN_ERR, EDAC_MC,
 			    "No irq %d in DT\n", irq);
-		return -ENODEV;
+		return irq;
 	}
 
 	/* Arria10 has a 2nd IRQ */
@@ -1919,7 +1919,7 @@ static int xgene_edac_probe(struct platform_device *pdev)
 			irq = platform_get_irq_optional(pdev, i);
 			if (irq < 0) {
 				dev_err(&pdev->dev, "No IRQ resource\n");
-				rc = -EINVAL;
+				rc = irq;
 				goto out_err;
 			}
 			rc = devm_request_irq(&pdev->dev, irq,
@@ -132,7 +132,7 @@ static int idt_gpio_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct gpio_irq_chip *girq;
 	struct idt_gpio_ctrl *ctrl;
-	unsigned int parent_irq;
+	int parent_irq;
 	int ngpios;
 	int ret;
 
@@ -47,7 +47,7 @@ struct mpc8xxx_gpio_chip {
 				unsigned offset, int value);
 
 	struct irq_domain *irq;
-	unsigned int irqn;
+	int irqn;
 };
 
 /*
@@ -1504,8 +1504,7 @@ static int amdgpu_pmops_prepare(struct device *dev)
 	 * DPM_FLAG_SMART_SUSPEND works properly
 	 */
 	if (amdgpu_device_supports_boco(drm_dev))
-		return pm_runtime_suspended(dev) &&
-			pm_suspend_via_firmware();
+		return pm_runtime_suspended(dev);
 
 	return 0;
 }
@@ -324,38 +324,38 @@ static struct clk_bw_params dcn31_bw_params = {
 
 };
 
-static struct wm_table ddr4_wm_table = {
+static struct wm_table ddr5_wm_table = {
 	.entries = {
 		{
 			.wm_inst = WM_A,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.72,
-			.sr_exit_time_us = 6.09,
-			.sr_enter_plus_exit_time_us = 7.14,
+			.sr_exit_time_us = 9,
+			.sr_enter_plus_exit_time_us = 11,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_B,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.72,
-			.sr_exit_time_us = 10.12,
-			.sr_enter_plus_exit_time_us = 11.48,
+			.sr_exit_time_us = 9,
+			.sr_enter_plus_exit_time_us = 11,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_C,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.72,
-			.sr_exit_time_us = 10.12,
-			.sr_enter_plus_exit_time_us = 11.48,
+			.sr_exit_time_us = 9,
+			.sr_enter_plus_exit_time_us = 11,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_D,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.72,
-			.sr_exit_time_us = 10.12,
-			.sr_enter_plus_exit_time_us = 11.48,
+			.sr_exit_time_us = 9,
+			.sr_enter_plus_exit_time_us = 11,
 			.valid = true,
 		},
 	}
@@ -683,7 +683,7 @@ void dcn31_clk_mgr_construct(
 	if (ctx->dc_bios->integrated_info->memory_type == LpDdr5MemType) {
 		dcn31_bw_params.wm_table = lpddr5_wm_table;
 	} else {
-		dcn31_bw_params.wm_table = ddr4_wm_table;
+		dcn31_bw_params.wm_table = ddr5_wm_table;
 	}
 	/* Saved clocks configured at boot for debug purposes */
 	dcn31_dump_clk_registers(&clk_mgr->base.base.boot_snapshot, &clk_mgr->base.base, &log_info);
@@ -3913,6 +3913,26 @@ static bool retrieve_link_cap(struct dc_link *link)
 			dp_hw_fw_revision.ieee_fw_rev,
 			sizeof(dp_hw_fw_revision.ieee_fw_rev));
 
+	/* Quirk for Apple MBP 2018 15" Retina panels: wrong DP_MAX_LINK_RATE */
+	{
+		uint8_t str_mbp_2018[] = { 101, 68, 21, 103, 98, 97 };
+		uint8_t fwrev_mbp_2018[] = { 7, 4 };
+		uint8_t fwrev_mbp_2018_vega[] = { 8, 4 };
+
+		/* We also check for the firmware revision as 16,1 models have an
+		 * identical device id and are incorrectly quirked otherwise.
+		 */
+		if ((link->dpcd_caps.sink_dev_id == 0x0010fa) &&
+		    !memcmp(link->dpcd_caps.sink_dev_id_str, str_mbp_2018,
+			    sizeof(str_mbp_2018)) &&
+		    (!memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018,
+			     sizeof(fwrev_mbp_2018)) ||
+		     !memcmp(link->dpcd_caps.sink_fw_revision, fwrev_mbp_2018_vega,
+			     sizeof(fwrev_mbp_2018_vega)))) {
+			link->reported_link_cap.link_rate = LINK_RATE_RBR2;
+		}
+	}
+
 	memset(&link->dpcd_caps.dsc_caps, '\0',
 	       sizeof(link->dpcd_caps.dsc_caps));
 	memset(&link->dpcd_caps.fec_cap, '\0', sizeof(link->dpcd_caps.fec_cap));
@@ -3728,14 +3728,14 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
 
 static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
 {
-	struct smu_table_context *table_context = &smu->smu_table;
-	PPTable_t *smc_pptable = table_context->driver_pptable;
+	uint16_t *mgpu_fan_boost_limit_rpm;
 
+	GET_PPTABLE_MEMBER(MGpuFanBoostLimitRpm, &mgpu_fan_boost_limit_rpm);
 	/*
 	 * Skip the MGpuFanBoost setting for those ASICs
 	 * which do not support it
 	 */
-	if (!smc_pptable->MGpuFanBoostLimitRpm)
+	if (*mgpu_fan_boost_limit_rpm == 0)
 		return 0;
 
 	return smu_cmn_send_smc_msg_with_param(smu,
@@ -959,6 +959,9 @@ static int check_overlay_dst(struct intel_overlay *overlay,
 	const struct intel_crtc_state *pipe_config =
 		overlay->crtc->config;
 
+	if (rec->dst_height == 0 || rec->dst_width == 0)
+		return -EINVAL;
+
 	if (rec->dst_x < pipe_config->pipe_src_w &&
 	    rec->dst_x + rec->dst_width <= pipe_config->pipe_src_w &&
 	    rec->dst_y < pipe_config->pipe_src_h &&
@@ -291,10 +291,11 @@ static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
 static bool adl_tc_phy_status_complete(struct intel_digital_port *dig_port)
 {
 	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
+	enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);
 	struct intel_uncore *uncore = &i915->uncore;
 	u32 val;
 
-	val = intel_uncore_read(uncore, TCSS_DDI_STATUS(dig_port->tc_phy_fia_idx));
+	val = intel_uncore_read(uncore, TCSS_DDI_STATUS(tc_port));
 	if (val == 0xffffffff) {
 		drm_dbg_kms(&i915->drm,
 			    "Port %s: PHY in TCCOLD, assuming not complete\n",
@@ -865,7 +865,7 @@ static const struct intel_device_info jsl_info = {
 	}, \
 	TGL_CURSOR_OFFSETS, \
 	.has_global_mocs = 1, \
-	.display.has_dsb = 1
+	.display.has_dsb = 0 /* FIXME: LUT load is broken with DSB */
 
 static const struct intel_device_info tgl_info = {
 	GEN12_FEATURES,
@@ -158,12 +158,6 @@ static void kmb_plane_atomic_disable(struct drm_plane *plane,
 	case LAYER_1:
 		kmb->plane_status[plane_id].ctrl = LCD_CTRL_VL2_ENABLE;
 		break;
-	case LAYER_2:
-		kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL1_ENABLE;
-		break;
-	case LAYER_3:
-		kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL2_ENABLE;
-		break;
 	}
 
 	kmb->plane_status[plane_id].disable = true;
@@ -38,7 +38,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
 		*addr += bios->imaged_addr;
 	}
 
-	if (unlikely(*addr + size >= bios->size)) {
+	if (unlikely(*addr + size > bios->size)) {
 		nvkm_error(&bios->subdev, "OOB %d %08x %08x\n", size, p, *addr);
 		return false;
 	}
@@ -3322,7 +3322,7 @@ static int cm_lap_handler(struct cm_work *work)
 	ret = cm_init_av_by_path(param->alternate_path, NULL, &alt_av);
 	if (ret) {
 		rdma_destroy_ah_attr(&ah_attr);
-		return -EINVAL;
+		goto deref;
 	}
 
 	spin_lock_irq(&cm_id_priv->lock);
@@ -67,8 +67,8 @@ static const char * const cma_events[] = {
 	[RDMA_CM_EVENT_TIMEWAIT_EXIT] = "timewait exit",
 };
 
-static void cma_set_mgid(struct rdma_id_private *id_priv, struct sockaddr *addr,
-			 union ib_gid *mgid);
+static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+			      enum ib_gid_type gid_type);
 
 const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
 {
@@ -1844,17 +1844,19 @@ static void destroy_mc(struct rdma_id_private *id_priv,
 		if (dev_addr->bound_dev_if)
 			ndev = dev_get_by_index(dev_addr->net,
 						dev_addr->bound_dev_if);
-		if (ndev) {
+		if (ndev && !send_only) {
+			enum ib_gid_type gid_type;
 			union ib_gid mgid;
 
-			cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
-				     &mgid);
-
-			if (!send_only)
-				cma_igmp_send(ndev, &mgid, false);
-
-			dev_put(ndev);
+			gid_type = id_priv->cma_dev->default_gid_type
+					   [id_priv->id.port_num -
+					    rdma_start_port(
+						    id_priv->cma_dev->device)];
+			cma_iboe_set_mgid((struct sockaddr *)&mc->addr, &mgid,
+					  gid_type);
+			cma_igmp_send(ndev, &mgid, false);
 		}
+		dev_put(ndev);
 
 		cancel_work_sync(&mc->iboe_join.work);
 	}
@@ -95,6 +95,7 @@ struct ucma_context {
 	u64 uid;
 
 	struct list_head list;
+	struct list_head mc_list;
 	struct work_struct close_work;
 };
 
@@ -105,6 +106,7 @@ struct ucma_multicast {
 
 	u64 uid;
 	u8 join_state;
+	struct list_head list;
 	struct sockaddr_storage addr;
 };
 
@@ -198,6 +200,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
 
 	INIT_WORK(&ctx->close_work, ucma_close_id);
 	init_completion(&ctx->comp);
+	INIT_LIST_HEAD(&ctx->mc_list);
 	/* So list_del() will work if we don't do ucma_finish_ctx() */
 	INIT_LIST_HEAD(&ctx->list);
 	ctx->file = file;
@@ -484,19 +487,19 @@ err1:
 
 static void ucma_cleanup_multicast(struct ucma_context *ctx)
 {
-	struct ucma_multicast *mc;
-	unsigned long index;
+	struct ucma_multicast *mc, *tmp;
 
-	xa_for_each(&multicast_table, index, mc) {
-		if (mc->ctx != ctx)
-			continue;
+	xa_lock(&multicast_table);
+	list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) {
+		list_del(&mc->list);
 		/*
 		 * At this point mc->ctx->ref is 0 so the mc cannot leave the
 		 * lock on the reader and this is enough serialization
 		 */
-		xa_erase(&multicast_table, index);
+		__xa_erase(&multicast_table, mc->id);
 		kfree(mc);
 	}
+	xa_unlock(&multicast_table);
 }
 
 static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
@@ -1469,12 +1472,16 @@ static ssize_t ucma_process_join(struct ucma_file *file,
 	mc->uid = cmd->uid;
 	memcpy(&mc->addr, addr, cmd->addr_size);
 
-	if (xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
+	xa_lock(&multicast_table);
+	if (__xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b,
 		     GFP_KERNEL)) {
 		ret = -ENOMEM;
 		goto err_free_mc;
 	}
 
+	list_add_tail(&mc->list, &ctx->mc_list);
+	xa_unlock(&multicast_table);
+
 	mutex_lock(&ctx->mutex);
 	ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr,
 				  join_state, mc);
@@ -1500,8 +1507,11 @@ err_leave_multicast:
 	mutex_unlock(&ctx->mutex);
 	ucma_cleanup_mc_events(mc);
 err_xa_erase:
-	xa_erase(&multicast_table, mc->id);
+	xa_lock(&multicast_table);
+	list_del(&mc->list);
+	__xa_erase(&multicast_table, mc->id);
 err_free_mc:
+	xa_unlock(&multicast_table);
 	kfree(mc);
 err_put_ctx:
 	ucma_put_ctx(ctx);
@@ -1569,15 +1579,17 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
 		mc = ERR_PTR(-EINVAL);
 	else if (!refcount_inc_not_zero(&mc->ctx->ref))
 		mc = ERR_PTR(-ENXIO);
-	else
-		__xa_erase(&multicast_table, mc->id);
-	xa_unlock(&multicast_table);
 
 	if (IS_ERR(mc)) {
+		xa_unlock(&multicast_table);
 		ret = PTR_ERR(mc);
 		goto out;
 	}
 
+	list_del(&mc->list);
+	__xa_erase(&multicast_table, mc->id);
+	xa_unlock(&multicast_table);
+
 	mutex_lock(&mc->ctx->mutex);
 	rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
 	mutex_unlock(&mc->ctx->mutex);
@@ -22,26 +22,35 @@ static int hfi1_ipoib_dev_init(struct net_device *dev)
 	int ret;
 
 	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+	if (!dev->tstats)
+		return -ENOMEM;
 
 	ret = priv->netdev_ops->ndo_init(dev);
 	if (ret)
-		return ret;
+		goto out_ret;
 
 	ret = hfi1_netdev_add_data(priv->dd,
 				   qpn_from_mac(priv->netdev->dev_addr),
 				   dev);
 	if (ret < 0) {
 		priv->netdev_ops->ndo_uninit(dev);
-		return ret;
+		goto out_ret;
 	}
 
 	return 0;
+out_ret:
+	free_percpu(dev->tstats);
+	dev->tstats = NULL;
+	return ret;
 }
 
 static void hfi1_ipoib_dev_uninit(struct net_device *dev)
 {
 	struct hfi1_ipoib_dev_priv *priv = hfi1_ipoib_priv(dev);
 
+	free_percpu(dev->tstats);
+	dev->tstats = NULL;
+
 	hfi1_netdev_remove_data(priv->dd, qpn_from_mac(priv->netdev->dev_addr));
 
 	priv->netdev_ops->ndo_uninit(dev);
@@ -166,12 +175,7 @@ static void hfi1_ipoib_netdev_dtor(struct net_device *dev)
 	hfi1_ipoib_rxq_deinit(priv->netdev);
 
 	free_percpu(dev->tstats);
-}
-
-static void hfi1_ipoib_free_rdma_netdev(struct net_device *dev)
-{
-	hfi1_ipoib_netdev_dtor(dev);
-	free_netdev(dev);
+	dev->tstats = NULL;
 }
 
 static void hfi1_ipoib_set_id(struct net_device *dev, int id)
@@ -211,24 +215,23 @@ static int hfi1_ipoib_setup_rn(struct ib_device *device,
 	priv->port_num = port_num;
 	priv->netdev_ops = netdev->netdev_ops;
 
-	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
-
 	ib_query_pkey(device, port_num, priv->pkey_index, &priv->pkey);
 
 	rc = hfi1_ipoib_txreq_init(priv);
 	if (rc) {
 		dd_dev_err(dd, "IPoIB netdev TX init - failed(%d)\n", rc);
-		hfi1_ipoib_free_rdma_netdev(netdev);
 		return rc;
 	}
 
	rc = hfi1_ipoib_rxq_init(netdev);
 	if (rc) {
 		dd_dev_err(dd, "IPoIB netdev RX init - failed(%d)\n", rc);
-		hfi1_ipoib_free_rdma_netdev(netdev);
+		hfi1_ipoib_txreq_deinit(priv);
 		return rc;
 	}
 
+	netdev->netdev_ops = &hfi1_ipoib_netdev_ops;
+
 	netdev->priv_destructor = hfi1_ipoib_netdev_dtor;
 	netdev->needs_free_netdev = true;
 
@@ -3249,7 +3249,7 @@ static void mlx4_ib_event(struct mlx4_dev *dev, void *ibdev_ptr,
 	case MLX4_DEV_EVENT_PORT_MGMT_CHANGE:
 		ew = kmalloc(sizeof *ew, GFP_ATOMIC);
 		if (!ew)
-			break;
+			return;
 
 		INIT_WORK(&ew->work, handle_port_mgmt_change_event);
 		memcpy(&ew->ib_eqe, eqe, sizeof *eqe);
@@ -3073,6 +3073,8 @@ do_write:
 	case IB_WR_ATOMIC_FETCH_AND_ADD:
 		if (unlikely(!(qp->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC)))
 			goto inv_err;
+		if (unlikely(wqe->atomic_wr.remote_addr & (sizeof(u64) - 1)))
+			goto inv_err;
 		if (unlikely(!rvt_rkey_ok(qp, &qp->r_sge.sge, sizeof(u64),
 					  wqe->atomic_wr.remote_addr,
 					  wqe->atomic_wr.rkey,
@@ -644,14 +644,9 @@ static inline struct siw_sqe *orq_get_current(struct siw_qp *qp)
 	return &qp->orq[qp->orq_get % qp->attrs.orq_size];
 }
 
-static inline struct siw_sqe *orq_get_tail(struct siw_qp *qp)
-{
-	return &qp->orq[qp->orq_put % qp->attrs.orq_size];
-}
-
 static inline struct siw_sqe *orq_get_free(struct siw_qp *qp)
 {
-	struct siw_sqe *orq_e = orq_get_tail(qp);
+	struct siw_sqe *orq_e = &qp->orq[qp->orq_put % qp->attrs.orq_size];
 
 	if (READ_ONCE(orq_e->flags) == 0)
 		return orq_e;
@@ -1153,11 +1153,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
 
 	spin_lock_irqsave(&qp->orq_lock, flags);
 
-	rreq = orq_get_current(qp);
-
 	/* free current orq entry */
+	rreq = orq_get_current(qp);
 	WRITE_ONCE(rreq->flags, 0);
 
+	qp->orq_get++;
+
 	if (qp->tx_ctx.orq_fence) {
 		if (unlikely(tx_waiting->wr_status != SIW_WR_QUEUED)) {
 			pr_warn("siw: [QP %u]: fence resume: bad status %d\n",
@@ -1165,10 +1166,12 @@ static int siw_check_tx_fence(struct siw_qp *qp)
 			rv = -EPROTO;
 			goto out;
 		}
-		/* resume SQ processing */
+		/* resume SQ processing, if possible */
 		if (tx_waiting->sqe.opcode == SIW_OP_READ ||
 		    tx_waiting->sqe.opcode == SIW_OP_READ_LOCAL_INV) {
-			rreq = orq_get_tail(qp);
+
+			/* SQ processing was stopped because of a full ORQ */
+			rreq = orq_get_free(qp);
 			if (unlikely(!rreq)) {
 				pr_warn("siw: [QP %u]: no ORQE\n", qp_id(qp));
 				rv = -EPROTO;
@@ -1181,15 +1184,14 @@ static int siw_check_tx_fence(struct siw_qp *qp)
 			resume_tx = 1;
 
 		} else if (siw_orq_empty(qp)) {
+			/*
+			 * SQ processing was stopped by fenced work request.
+			 * Resume since all previous Read's are now completed.
+			 */
 			qp->tx_ctx.orq_fence = 0;
 			resume_tx = 1;
-		} else {
-			pr_warn("siw: [QP %u]: fence resume: orq idx: %d:%d\n",
-				qp_id(qp), qp->orq_get, qp->orq_put);
-			rv = -EPROTO;
 		}
 	}
-	qp->orq_get++;
 out:
 	spin_unlock_irqrestore(&qp->orq_lock, flags);
 
@@ -311,7 +311,8 @@ int siw_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
 
 	if (atomic_inc_return(&sdev->num_qp) > SIW_MAX_QP) {
 		siw_dbg(base_dev, "too many QP's\n");
-		return -ENOMEM;
+		rv = -ENOMEM;
+		goto err_atomic;
 	}
 	if (attrs->qp_type != IB_QPT_RC) {
 		siw_dbg(base_dev, "only RC QP's supported\n");
@@ -21,6 +21,7 @@
 #include <linux/export.h>
 #include <linux/kmemleak.h>
 #include <linux/mem_encrypt.h>
+#include <linux/iopoll.h>
 #include <asm/pci-direct.h>
 #include <asm/iommu.h>
 #include <asm/apic.h>
@@ -832,6 +833,7 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
 		status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
 		if (status & (MMIO_STATUS_GALOG_RUN_MASK))
 			break;
+		udelay(10);
 	}
 
 	if (WARN_ON(i >= LOOP_TIMEOUT))
@@ -569,9 +569,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 					    fn, &intel_ir_domain_ops,
 					    iommu);
 	if (!iommu->ir_domain) {
-		irq_domain_free_fwnode(fn);
 		pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
-		goto out_free_bitmap;
+		goto out_free_fwnode;
 	}
 	iommu->ir_msi_domain =
 		arch_create_remap_msi_irq_domain(iommu->ir_domain,
@@ -595,7 +594,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 
 		if (dmar_enable_qi(iommu)) {
 			pr_err("Failed to enable queued invalidation\n");
-			goto out_free_bitmap;
+			goto out_free_ir_domain;
 		}
 	}
 
@@ -619,6 +618,14 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 
 	return 0;
 
+out_free_ir_domain:
+	if (iommu->ir_msi_domain)
+		irq_domain_remove(iommu->ir_msi_domain);
+	iommu->ir_msi_domain = NULL;
+	irq_domain_remove(iommu->ir_domain);
+	iommu->ir_domain = NULL;
+out_free_fwnode:
+	irq_domain_free_fwnode(fn);
 out_free_bitmap:
 	bitmap_free(bitmap);
 out_free_pages:
@@ -36,6 +36,7 @@ config NET_DSA_LANTIQ_GSWIP
 config NET_DSA_MT7530
 	tristate "MediaTek MT753x and MT7621 Ethernet switch support"
 	select NET_DSA_TAG_MTK
+	select MEDIATEK_GE_PHY
 	help
 	  This enables support for the MediaTek MT7530, MT7531, and MT7621
 	  Ethernet switch chips.
@@ -281,7 +281,7 @@ static int gve_adminq_parse_err(struct gve_priv *priv, u32 status)
  */
 static int gve_adminq_kick_and_wait(struct gve_priv *priv)
 {
-	u32 tail, head;
+	int tail, head;
 	int i;
 
 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
@@ -114,7 +114,8 @@ enum e1000_boards {
 	board_pch_lpt,
 	board_pch_spt,
 	board_pch_cnp,
-	board_pch_tgp
+	board_pch_tgp,
+	board_pch_adp
 };
 
 struct e1000_ps_page {
@@ -501,6 +502,7 @@ extern const struct e1000_info e1000_pch_lpt_info;
 extern const struct e1000_info e1000_pch_spt_info;
 extern const struct e1000_info e1000_pch_cnp_info;
 extern const struct e1000_info e1000_pch_tgp_info;
+extern const struct e1000_info e1000_pch_adp_info;
 extern const struct e1000_info e1000_es2_info;
 
 void e1000e_ptp_init(struct e1000_adapter *adapter);
@@ -6021,3 +6021,23 @@ const struct e1000_info e1000_pch_tgp_info = {
 	.phy_ops = &ich8_phy_ops,
 	.nvm_ops = &spt_nvm_ops,
 };
+
+const struct e1000_info e1000_pch_adp_info = {
+	.mac = e1000_pch_adp,
+	.flags = FLAG_IS_ICH
+		 | FLAG_HAS_WOL
+		 | FLAG_HAS_HW_TIMESTAMP
+		 | FLAG_HAS_CTRLEXT_ON_LOAD
+		 | FLAG_HAS_AMT
+		 | FLAG_HAS_FLASH
+		 | FLAG_HAS_JUMBO_FRAMES
+		 | FLAG_APME_IN_WUC,
+	.flags2 = FLAG2_HAS_PHY_STATS
+		  | FLAG2_HAS_EEE,
+	.pba = 26,
+	.max_hw_frame_size = 9022,
+	.get_variants = e1000_get_variants_ich8lan,
+	.mac_ops = &ich8_mac_ops,
+	.phy_ops = &ich8_phy_ops,
+	.nvm_ops = &spt_nvm_ops,
+};
@@ -52,6 +52,7 @@ static const struct e1000_info *e1000_info_tbl[] = {
 	[board_pch_spt] = &e1000_pch_spt_info,
 	[board_pch_cnp] = &e1000_pch_cnp_info,
 	[board_pch_tgp] = &e1000_pch_tgp_info,
+	[board_pch_adp] = &e1000_pch_adp_info,
 };
 
 struct e1000_reg_info {
@@ -7905,22 +7906,22 @@ static const struct pci_device_id e1000_pci_tbl[] = {
 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp },
 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp },
 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_tgp },
-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_tgp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_adp },
+	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_adp },
 
 	{ 0, 0, 0, 0, 0, 0, 0 } /* terminate list */
 };
@@ -49,13 +49,15 @@ struct visconti_eth {
 	void __iomem *reg;
 	u32 phy_intf_sel;
 	struct clk *phy_ref_clk;
+	struct device *dev;
 	spinlock_t lock; /* lock to protect register update */
 };
 
 static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
 {
 	struct visconti_eth *dwmac = priv;
-	unsigned int val, clk_sel_val;
+	struct net_device *netdev = dev_get_drvdata(dwmac->dev);
+	unsigned int val, clk_sel_val = 0;
 	unsigned long flags;
 
 	spin_lock_irqsave(&dwmac->lock, flags);
@@ -85,7 +87,9 @@ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
 		break;
 	default:
 		/* No bit control */
-		break;
+		netdev_err(netdev, "Unsupported speed request (%d)", speed);
+		spin_unlock_irqrestore(&dwmac->lock, flags);
+		return;
 	}
 
 	writel(val, dwmac->reg + MAC_CTRL_REG);
@@ -230,6 +234,7 @@ static int visconti_eth_dwmac_probe(struct platform_device *pdev)
 
 	spin_lock_init(&dwmac->lock);
 	dwmac->reg = stmmac_res.addr;
+	dwmac->dev = &pdev->dev;
 	plat_dat->bsp_priv = dwmac;
 	plat_dat->fix_mac_speed = visconti_eth_fix_mac_speed;
 
@@ -150,6 +150,7 @@
 
 #define NUM_DWMAC100_DMA_REGS 9
 #define NUM_DWMAC1000_DMA_REGS 23
+#define NUM_DWMAC4_DMA_REGS 27
 
 void dwmac_enable_dma_transmission(void __iomem *ioaddr);
 void dwmac_enable_dma_irq(void __iomem *ioaddr, u32 chan, bool rx, bool tx);
@@ -21,10 +21,18 @@
 #include "dwxgmac2.h"
 
 #define REG_SPACE_SIZE 0x1060
+#define GMAC4_REG_SPACE_SIZE 0x116C
 #define MAC100_ETHTOOL_NAME "st_mac100"
 #define GMAC_ETHTOOL_NAME "st_gmac"
 #define XGMAC_ETHTOOL_NAME "st_xgmac"
 
+/* Same as DMA_CHAN_BASE_ADDR defined in dwmac4_dma.h
+ *
+ * It is here because dwmac_dma.h and dwmac4_dam.h can not be included at the
+ * same time due to the conflicting macro names.
+ */
+#define GMAC4_DMA_CHAN_BASE_ADDR 0x00001100
+
 #define ETHTOOL_DMA_OFFSET 55
 
 struct stmmac_stats {
@@ -435,6 +443,8 @@ static int stmmac_ethtool_get_regs_len(struct net_device *dev)
 
 	if (priv->plat->has_xgmac)
 		return XGMAC_REGSIZE * 4;
+	else if (priv->plat->has_gmac4)
+		return GMAC4_REG_SPACE_SIZE;
 	return REG_SPACE_SIZE;
 }
 
@@ -447,8 +457,13 @@ static void stmmac_ethtool_gregs(struct net_device *dev,
 	stmmac_dump_mac_regs(priv, priv->hw, reg_space);
 	stmmac_dump_dma_regs(priv, priv->ioaddr, reg_space);
 
-	if (!priv->plat->has_xgmac) {
-		/* Copy DMA registers to where ethtool expects them */
+	/* Copy DMA registers to where ethtool expects them */
+	if (priv->plat->has_gmac4) {
+		/* GMAC4 dumps its DMA registers at its DMA_CHAN_BASE_ADDR */
+		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
+		       &reg_space[GMAC4_DMA_CHAN_BASE_ADDR / 4],
+		       NUM_DWMAC4_DMA_REGS * 4);
+	} else if (!priv->plat->has_xgmac) {
 		memcpy(&reg_space[ETHTOOL_DMA_OFFSET],
 		       &reg_space[DMA_BUS_MODE / 4],
 		       NUM_DWMAC1000_DMA_REGS * 4);
@@ -145,15 +145,20 @@ static int adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,
 
 static void get_systime(void __iomem *ioaddr, u64 *systime)
 {
-	u64 ns;
-
-	/* Get the TSSS value */
-	ns = readl(ioaddr + PTP_STNSR);
-	/* Get the TSS and convert sec time value to nanosecond */
-	ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;
+	u64 ns, sec0, sec1;
+
+	/* Get the TSS value */
+	sec1 = readl_relaxed(ioaddr + PTP_STSR);
+	do {
+		sec0 = sec1;
+		/* Get the TSSS value */
+		ns = readl_relaxed(ioaddr + PTP_STNSR);
+		/* Get the TSS value */
+		sec1 = readl_relaxed(ioaddr + PTP_STSR);
+	} while (sec0 != sec1);
 
 	if (systime)
-		*systime = ns;
+		*systime = ns + (sec1 * 1000000000ULL);
 }
 
 static void get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
@@ -7120,6 +7120,10 @@ int stmmac_dvr_remove(struct device *dev)
 
 	netdev_info(priv->dev, "%s: removing driver", __func__);
 
+	pm_runtime_get_sync(dev);
+	pm_runtime_disable(dev);
+	pm_runtime_put_noidle(dev);
+
 	stmmac_stop_all_dma(priv);
 	stmmac_mac_set(priv, priv->ioaddr, false);
 	netif_carrier_off(ndev);
@@ -7138,8 +7142,6 @@ int stmmac_dvr_remove(struct device *dev)
 	if (priv->plat->stmmac_rst)
 		reset_control_assert(priv->plat->stmmac_rst);
 	reset_control_assert(priv->plat->stmmac_ahb_rst);
-	pm_runtime_put(dev);
-	pm_runtime_disable(dev);
 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
 	    priv->hw->pcs != STMMAC_PCS_RTBI)
 		stmmac_mdio_unregister(ndev);
@@ -1771,6 +1771,7 @@ static int ca8210_async_xmit_complete(
 			status
 		);
 		if (status != MAC_TRANSACTION_OVERFLOW) {
+			dev_kfree_skb_any(priv->tx_skb);
 			ieee802154_wake_queue(priv->hw);
 			return 0;
 		}
@@ -786,6 +786,7 @@ static int hwsim_add_one(struct genl_info *info, struct device *dev,
 		goto err_pib;
 	}
 
+	pib->channel = 13;
 	rcu_assign_pointer(phy->pib, pib);
 	phy->idx = idx;
 	INIT_LIST_HEAD(&phy->edges);
@@ -976,8 +976,8 @@ static void mcr20a_hw_setup(struct mcr20a_local *lp)
 	dev_dbg(printdev(lp), "%s\n", __func__);
 
 	phy->symbol_duration = 16;
-	phy->lifs_period = 40;
-	phy->sifs_period = 12;
+	phy->lifs_period = 40 * phy->symbol_duration;
+	phy->sifs_period = 12 * phy->symbol_duration;
 
 	hw->flags = IEEE802154_HW_TX_OMIT_CKSUM |
 		    IEEE802154_HW_AFILT |
@@ -3870,6 +3870,18 @@ static void macsec_common_dellink(struct net_device *dev, struct list_head *head)
 	struct macsec_dev *macsec = macsec_priv(dev);
 	struct net_device *real_dev = macsec->real_dev;
 
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
+
+		ops = macsec_get_ops(netdev_priv(dev), &ctx);
+		if (ops) {
+			ctx.secy = &macsec->secy;
+			macsec_offload(ops->mdo_del_secy, &ctx);
+		}
+	}
+
 	unregister_netdevice_queue(dev, head);
 	list_del_rcu(&macsec->secys);
 	macsec_del_dev(macsec);
@@ -3884,18 +3896,6 @@ static void macsec_dellink(struct net_device *dev, struct list_head *head)
 	struct net_device *real_dev = macsec->real_dev;
 	struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);
 
-	/* If h/w offloading is available, propagate to the device */
-	if (macsec_is_offloaded(macsec)) {
-		const struct macsec_ops *ops;
-		struct macsec_context ctx;
-
-		ops = macsec_get_ops(netdev_priv(dev), &ctx);
-		if (ops) {
-			ctx.secy = &macsec->secy;
-			macsec_offload(ops->mdo_del_secy, &ctx);
-		}
-	}
-
 	macsec_common_dellink(dev, head);
 
 	if (list_empty(&rxd->secys)) {
@@ -4018,6 +4018,15 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
 	    !macsec_check_offload(macsec->offload, macsec))
 		return -EOPNOTSUPP;
 
+	/* send_sci must be set to true when transmit sci explicitly is set */
+	if ((data && data[IFLA_MACSEC_SCI]) &&
+	    (data && data[IFLA_MACSEC_INC_SCI])) {
+		u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]);
+
+		if (!send_sci)
+			return -EINVAL;
+	}
+
 	if (data && data[IFLA_MACSEC_ICV_LEN])
 		icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
 	mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
@@ -169,6 +169,7 @@ nvmf_ctlr_matches_baseopts(struct nvme_ctrl *ctrl,
 			struct nvmf_ctrl_options *opts)
 {
 	if (ctrl->state == NVME_CTRL_DELETING ||
+	    ctrl->state == NVME_CTRL_DELETING_NOIO ||
 	    ctrl->state == NVME_CTRL_DEAD ||
 	    strcmp(opts->subsysnqn, ctrl->opts->subsysnqn) ||
 	    strcmp(opts->host->nqn, ctrl->opts->host->nqn) ||
@@ -1263,16 +1263,18 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
 				     sizeof(*girq->parents),
 				     GFP_KERNEL);
 	if (!girq->parents) {
-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto out_remove;
 	}
 
 	if (is_7211) {
 		pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS,
 					    sizeof(*pc->wake_irq),
 					    GFP_KERNEL);
-		if (!pc->wake_irq)
-			return -ENOMEM;
+		if (!pc->wake_irq) {
+			err = -ENOMEM;
+			goto out_remove;
+		}
 	}
 
 	/*
@@ -1300,8 +1302,10 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
 
 		len = strlen(dev_name(pc->dev)) + 16;
 		name = devm_kzalloc(pc->dev, len, GFP_KERNEL);
-		if (!name)
-			return -ENOMEM;
+		if (!name) {
+			err = -ENOMEM;
+			goto out_remove;
+		}
 
 		snprintf(name, len, "%s:bank%d", dev_name(pc->dev), i);
 
@@ -1320,11 +1324,14 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
 	err = gpiochip_add_data(&pc->gpio_chip, pc);
 	if (err) {
 		dev_err(dev, "could not add GPIO chip\n");
-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
-		return err;
+		goto out_remove;
 	}
 
 	return 0;
+
+out_remove:
+	pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+	return err;
 }
 
 static struct platform_driver bcm2835_pinctrl_driver = {
@@ -451,8 +451,8 @@ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
 	value &= ~PADCFG0_PMODE_MASK;
 	value |= PADCFG0_PMODE_GPIO;
 
-	/* Disable input and output buffers */
-	value |= PADCFG0_GPIORXDIS;
+	/* Disable TX buffer and enable RX (this will be input) */
+	value &= ~PADCFG0_GPIORXDIS;
 	value |= PADCFG0_GPIOTXDIS;
 
 	/* Disable SCI/SMI/NMI generation */
@@ -497,9 +497,6 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
 
 	intel_gpio_set_gpio_mode(padcfg0);
 
-	/* Disable TX buffer and enable RX (this will be input) */
-	__intel_gpio_set_direction(padcfg0, true);
-
 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
 
 	return 0;
@@ -1115,9 +1112,6 @@ static int intel_gpio_irq_type(struct irq_data *d, unsigned int type)
 
 	intel_gpio_set_gpio_mode(reg);
 
-	/* Disable TX buffer and enable RX (this will be input) */
-	__intel_gpio_set_direction(reg, true);
-
 	value = readl(reg);
 
 	value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
@@ -1216,6 +1210,39 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
 	return IRQ_RETVAL(ret);
 }
 
+static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+{
+	int i;
+
+	for (i = 0; i < pctrl->ncommunities; i++) {
+		const struct intel_community *community;
+		void __iomem *base;
+		unsigned int gpp;
+
+		community = &pctrl->communities[i];
+		base = community->regs;
+
+		for (gpp = 0; gpp < community->ngpps; gpp++) {
+			/* Mask and clear all interrupts */
+			writel(0, base + community->ie_offset + gpp * 4);
+			writel(0xffff, base + community->is_offset + gpp * 4);
+		}
+	}
+}
+
+static int intel_gpio_irq_init_hw(struct gpio_chip *gc)
+{
+	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
+
+	/*
+	 * Make sure the interrupt lines are in a proper state before
+	 * further configuration.
+	 */
+	intel_gpio_irq_init(pctrl);
+
+	return 0;
+}
+
 static int intel_gpio_add_community_ranges(struct intel_pinctrl *pctrl,
 					   const struct intel_community *community)
 {
@@ -1320,6 +1347,7 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
 	girq->num_parents = 0;
 	girq->default_type = IRQ_TYPE_NONE;
 	girq->handler = handle_bad_irq;
+	girq->init_hw = intel_gpio_irq_init_hw;
 
 	ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
 	if (ret) {
@@ -1695,26 +1723,6 @@ int intel_pinctrl_suspend_noirq(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq);
 
-static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
-{
-	size_t i;
-
-	for (i = 0; i < pctrl->ncommunities; i++) {
-		const struct intel_community *community;
-		void __iomem *base;
-		unsigned int gpp;
-
-		community = &pctrl->communities[i];
-		base = community->regs;
-
-		for (gpp = 0; gpp < community->ngpps; gpp++) {
-			/* Mask and clear all interrupts */
-			writel(0, base + community->ie_offset + gpp * 4);
-			writel(0xffff, base + community->is_offset + gpp * 4);
-		}
-	}
-}
-
 static bool intel_gpio_update_reg(void __iomem *reg, u32 mask, u32 value)
 {
 	u32 curr, updated;
@@ -363,16 +363,16 @@ static const struct sunxi_desc_pin h616_pins[] = {
 		  SUNXI_FUNCTION(0x0, "gpio_in"),
 		  SUNXI_FUNCTION(0x1, "gpio_out"),
 		  SUNXI_FUNCTION(0x2, "uart2"),		/* CTS */
-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DO0 */
+		  SUNXI_FUNCTION(0x3, "i2s3_dout0"),	/* DO0 */
 		  SUNXI_FUNCTION(0x4, "spi1"),		/* MISO */
-		  SUNXI_FUNCTION(0x5, "i2s3"),	/* DI1 */
+		  SUNXI_FUNCTION(0x5, "i2s3_din1"),	/* DI1 */
 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 8)),	/* PH_EINT8 */
 	SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 9),
 		  SUNXI_FUNCTION(0x0, "gpio_in"),
 		  SUNXI_FUNCTION(0x1, "gpio_out"),
-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DI0 */
+		  SUNXI_FUNCTION(0x3, "i2s3_din0"),	/* DI0 */
 		  SUNXI_FUNCTION(0x4, "spi1"),		/* CS1 */
-		  SUNXI_FUNCTION(0x3, "i2s3"),	/* DO1 */
+		  SUNXI_FUNCTION(0x5, "i2s3_dout1"),	/* DO1 */
 		  SUNXI_FUNCTION_IRQ_BANK(0x6, 6, 9)),	/* PH_EINT9 */
 	SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 10),
 		  SUNXI_FUNCTION(0x0, "gpio_in"),
@@ -104,7 +104,7 @@ again:
 	time->tm_year += real_year - 72;
 #endif
 
-	if (century > 20)
+	if (century > 19)
 		time->tm_year += (century - 19) * 100;
 
 	/*
@@ -508,7 +508,8 @@ static int bnx2fc_l2_rcv_thread(void *arg)
 
 static void bnx2fc_recv_frame(struct sk_buff *skb)
 {
-	u32 fr_len;
+	u64 crc_err;
+	u32 fr_len, fr_crc;
 	struct fc_lport *lport;
 	struct fcoe_rcv_info *fr;
 	struct fc_stats *stats;
@@ -542,6 +543,11 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
 	skb_pull(skb, sizeof(struct fcoe_hdr));
 	fr_len = skb->len - sizeof(struct fcoe_crc_eof);
 
+	stats = per_cpu_ptr(lport->stats, get_cpu());
+	stats->RxFrames++;
+	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
+	put_cpu();
+
 	fp = (struct fc_frame *)skb;
 	fc_frame_init(fp);
 	fr_dev(fp) = lport;
@@ -624,16 +630,15 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
 		return;
 	}
 
-	stats = per_cpu_ptr(lport->stats, smp_processor_id());
-	stats->RxFrames++;
-	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
+	fr_crc = le32_to_cpu(fr_crc(fp));
 
-	if (le32_to_cpu(fr_crc(fp)) !=
-	    ~crc32(~0, skb->data, fr_len)) {
-		if (stats->InvalidCRCCount < 5)
+	if (unlikely(fr_crc != ~crc32(~0, skb->data, fr_len))) {
+		stats = per_cpu_ptr(lport->stats, get_cpu());
+		crc_err = (stats->InvalidCRCCount++);
+		put_cpu();
+		if (crc_err < 5)
 			printk(KERN_WARNING PFX "dropping frame with "
 			       "CRC error\n");
-		stats->InvalidCRCCount++;
 		kfree_skb(skb);
 		return;
 	}
@@ -411,17 +411,12 @@ out:
 	return ret;
 }
 
-static int init_clks(struct platform_device *pdev, struct clk **clk)
+static void init_clks(struct platform_device *pdev, struct clk **clk)
 {
 	int i;
 
-	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
+	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
-		if (IS_ERR(clk[i]))
-			return PTR_ERR(clk[i]);
-	}
-
-	return 0;
 }
 
 static struct scp *init_scp(struct platform_device *pdev,
@@ -431,7 +426,7 @@ static struct scp *init_scp(struct platform_device *pdev,
 {
 	struct genpd_onecell_data *pd_data;
 	struct resource *res;
-	int i, j, ret;
+	int i, j;
 	struct scp *scp;
 	struct clk *clk[CLK_MAX];
 
@@ -486,9 +481,7 @@ static struct scp *init_scp(struct platform_device *pdev,
 
 	pd_data->num_domains = num;
 
-	ret = init_clks(pdev, clk);
-	if (ret)
-		return ERR_PTR(ret);
+	init_clks(pdev, clk);
 
 	for (i = 0; i < num; i++) {
 		struct scp_domain *scpd = &scp->domains[i];
@@ -552,7 +552,7 @@ static void bcm_qspi_chip_select(struct bcm_qspi *qspi, int cs)
 	u32 rd = 0;
 	u32 wr = 0;
 
-	if (qspi->base[CHIP_SELECT]) {
+	if (cs >= 0 && qspi->base[CHIP_SELECT]) {
 		rd = bcm_qspi_read(qspi, CHIP_SELECT, 0);
 		wr = (rd & ~0xff) | (1 << cs);
 		if (rd == wr)
@@ -693,6 +693,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
 	writel_relaxed(0, spicc->base + SPICC_INTREG);
 
 	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		ret = irq;
+		goto out_master;
+	}
+
 	ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq,
 			       0, NULL, spicc);
 	if (ret) {
@@ -624,7 +624,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
 	else
 		mdata->state = MTK_SPI_IDLE;
 
-	if (!master->can_dma(master, master->cur_msg->spi, trans)) {
+	if (!master->can_dma(master, NULL, trans)) {
 		if (trans->rx_buf) {
 			cnt = mdata->xfer_len / 4;
 			ioread32_rep(mdata->base + SPI_RX_DATA_REG,
@@ -688,7 +688,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
 	struct resource *res;
 	int ret, irq;
 
-	ctrl = spi_alloc_master(dev, sizeof(*qspi));
+	ctrl = devm_spi_alloc_master(dev, sizeof(*qspi));
 	if (!ctrl)
 		return -ENOMEM;
 
@@ -697,58 +697,46 @@ static int stm32_qspi_probe(struct platform_device *pdev)
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi");
 	qspi->io_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(qspi->io_base)) {
-		ret = PTR_ERR(qspi->io_base);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->io_base))
+		return PTR_ERR(qspi->io_base);
 
 	qspi->phys_base = res->start;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm");
 	qspi->mm_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(qspi->mm_base)) {
-		ret = PTR_ERR(qspi->mm_base);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->mm_base))
+		return PTR_ERR(qspi->mm_base);
 
 	qspi->mm_size = resource_size(res);
-	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) {
-		ret = -EINVAL;
-		goto err_master_put;
-	}
+	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ)
+		return -EINVAL;
 
 	irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		ret = irq;
-		goto err_master_put;
-	}
+	if (irq < 0)
+		return irq;
 
 	ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0,
 			       dev_name(dev), qspi);
 	if (ret) {
 		dev_err(dev, "failed to request irq\n");
-		goto err_master_put;
+		return ret;
 	}
 
 	init_completion(&qspi->data_completion);
 	init_completion(&qspi->match_completion);
 
 	qspi->clk = devm_clk_get(dev, NULL);
-	if (IS_ERR(qspi->clk)) {
-		ret = PTR_ERR(qspi->clk);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->clk))
+		return PTR_ERR(qspi->clk);
 
 	qspi->clk_rate = clk_get_rate(qspi->clk);
-	if (!qspi->clk_rate) {
-		ret = -EINVAL;
-		goto err_master_put;
-	}
+	if (!qspi->clk_rate)
+		return -EINVAL;
 
 	ret = clk_prepare_enable(qspi->clk);
 	if (ret) {
 		dev_err(dev, "can not enable the clock\n");
-		goto err_master_put;
+		return ret;
 	}
 
 	rstc = devm_reset_control_get_exclusive(dev, NULL);
@@ -784,7 +772,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
 	pm_runtime_enable(dev);
 	pm_runtime_get_noresume(dev);
 
-	ret = devm_spi_register_master(dev, ctrl);
+	ret = spi_register_master(ctrl);
 	if (ret)
 		goto err_pm_runtime_free;
 
@@ -806,8 +794,6 @@ err_dma_free:
 	stm32_qspi_dma_free(qspi);
 err_clk_disable:
 	clk_disable_unprepare(qspi->clk);
-err_master_put:
-	spi_master_put(qspi->ctrl);
 
 	return ret;
 }
@@ -817,6 +803,7 @@ static int stm32_qspi_remove(struct platform_device *pdev)
 	struct stm32_qspi *qspi = platform_get_drvdata(pdev);
 
 	pm_runtime_get_sync(qspi->dev);
+	spi_unregister_master(qspi->ctrl);
 	/* disable qspi */
 	writel_relaxed(0, qspi->io_base + QSPI_CR);
 	stm32_qspi_dma_free(qspi);
@@ -726,7 +726,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
 		if (ret) {
 			dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n",
 				ret);
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		dma_tx_burst = caps.max_burst;
 	}
@@ -735,7 +735,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
 	if (IS_ERR_OR_NULL(master->dma_rx)) {
 		if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
 			ret = -EPROBE_DEFER;
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		master->dma_rx = NULL;
 		dma_rx_burst = INT_MAX;
@@ -744,7 +744,7 @@ static int uniphier_spi_probe(struct platform_device *pdev)
 		if (ret) {
 			dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n",
 				ret);
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		dma_rx_burst = caps.max_burst;
 	}
@@ -753,10 +753,20 @@ static int uniphier_spi_probe(struct platform_device *pdev)
 
 	ret = devm_spi_register_master(&pdev->dev, master);
 	if (ret)
-		goto out_disable_clk;
+		goto out_release_dma;
 
 	return 0;
 
+out_release_dma:
+	if (!IS_ERR_OR_NULL(master->dma_rx)) {
+		dma_release_channel(master->dma_rx);
+		master->dma_rx = NULL;
+	}
+	if (!IS_ERR_OR_NULL(master->dma_tx)) {
+		dma_release_channel(master->dma_tx);
+		master->dma_tx = NULL;
+	}
+
 out_disable_clk:
 	clk_disable_unprepare(priv->clk);
 
@@ -78,6 +78,26 @@ config FRAMEBUFFER_CONSOLE
 	help
 	  Low-level framebuffer-based console driver.
 
+config FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	bool "Enable legacy fbcon hardware acceleration code"
+	depends on FRAMEBUFFER_CONSOLE
+	default y if PARISC
+	default n
+	help
+	  This option enables the fbcon (framebuffer text-based) hardware
+	  acceleration for graphics drivers which were written for the fbdev
+	  graphics interface.
+
+	  On modern machines, on mainstream machines (like x86-64) or when
+	  using a modern Linux distribution those fbdev drivers usually aren't used.
+	  So enabling this option wouldn't have any effect, which is why you want
+	  to disable this option on such newer machines.
+
+	  If you compile this kernel for older machines which still require the
+	  fbdev drivers, you may want to say Y.
+
+	  If unsure, select n.
+
 config FRAMEBUFFER_CONSOLE_DETECT_PRIMARY
 	bool "Map the console to the primary display device"
 	depends on FRAMEBUFFER_CONSOLE
@@ -1025,7 +1025,7 @@ static void fbcon_init(struct vc_data *vc, int init)
 	struct vc_data *svc = *default_mode;
 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
 	int logo = 1, new_rows, new_cols, rows, cols;
-	int ret;
+	int cap, ret;
 
 	if (WARN_ON(info_idx == -1))
 	    return;
@@ -1034,6 +1034,7 @@ static void fbcon_init(struct vc_data *vc, int init)
 		con2fb_map[vc->vc_num] = info_idx;
 
 	info = registered_fb[con2fb_map[vc->vc_num]];
+	cap = info->flags;
 
 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
 		logo_shown = FBCON_LOGO_DONTSHOW;
@@ -1135,13 +1136,13 @@ static void fbcon_init(struct vc_data *vc, int init)
 
 	ops->graphics = 0;
 
-	/*
-	 * No more hw acceleration for fbcon.
-	 *
-	 * FIXME: Garbage collect all the now dead code after sufficient time
-	 * has passed.
-	 */
-	p->scrollmode = SCROLL_REDRAW;
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
+	    !(cap & FBINFO_HWACCEL_DISABLED))
+		p->scrollmode = SCROLL_MOVE;
+	else /* default to something safe */
+		p->scrollmode = SCROLL_REDRAW;
+#endif
 
 	/*
 	 * ++guenther: console.c:vc_allocate() relies on initializing
@@ -1706,7 +1707,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
 		count = vc->vc_rows;
 		if (logo_shown >= 0)
 			goto redraw_up;
-		switch (p->scrollmode) {
+		switch (fb_scrollmode(p)) {
 		case SCROLL_MOVE:
 			fbcon_redraw_blit(vc, info, p, t, b - t - count,
 				     count);
@@ -1796,7 +1797,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
 		count = vc->vc_rows;
 		if (logo_shown >= 0)
 			goto redraw_down;
-		switch (p->scrollmode) {
+		switch (fb_scrollmode(p)) {
 		case SCROLL_MOVE:
 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
 				     -count);
@@ -1947,6 +1948,48 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy,
 			height, width);
 }
 
+static void updatescrollmode_accel(struct fbcon_display *p,
+					struct fb_info *info,
+					struct vc_data *vc)
+{
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	struct fbcon_ops *ops = info->fbcon_par;
+	int cap = info->flags;
+	u16 t = 0;
+	int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep,
+				  info->fix.xpanstep);
+	int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t);
+	int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+	int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual,
+				   info->var.xres_virtual);
+	int good_pan = (cap & FBINFO_HWACCEL_YPAN) &&
+		divides(ypan, vc->vc_font.height) && vyres > yres;
+	int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) &&
+		divides(ywrap, vc->vc_font.height) &&
+		divides(vc->vc_font.height, vyres) &&
+		divides(vc->vc_font.height, yres);
+	int reading_fast = cap & FBINFO_READS_FAST;
+	int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) &&
+		!(cap & FBINFO_HWACCEL_DISABLED);
+	int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) &&
+		!(cap & FBINFO_HWACCEL_DISABLED);
+
+	if (good_wrap || good_pan) {
+		if (reading_fast || fast_copyarea)
+			p->scrollmode = good_wrap ?
+				SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;
+		else
+			p->scrollmode = good_wrap ? SCROLL_REDRAW :
+				SCROLL_PAN_REDRAW;
+	} else {
+		if (reading_fast || (fast_copyarea && !fast_imageblit))
+			p->scrollmode = SCROLL_MOVE;
+		else
+			p->scrollmode = SCROLL_REDRAW;
+	}
+#endif
+}
+
 static void updatescrollmode(struct fbcon_display *p,
 					struct fb_info *info,
 					struct vc_data *vc)
@@ -1962,6 +2005,9 @@ static void updatescrollmode(struct fbcon_display *p,
 		p->vrows -= (yres - (fh * vc->vc_rows)) / fh;
 	if ((yres % fh) && (vyres % fh < yres % fh))
 		p->vrows--;
+
+	/* update scrollmode in case hardware acceleration is used */
+	updatescrollmode_accel(p, info, vc);
 }
 
 #define PITCH(w) (((w) + 7) >> 3)
@@ -2119,7 +2165,7 @@ static int fbcon_switch(struct vc_data *vc)
 
 	updatescrollmode(p, info, vc);
 
-	switch (p->scrollmode) {
+	switch (fb_scrollmode(p)) {
 	case SCROLL_WRAP_MOVE:
 		scrollback_phys_max = p->vrows - vc->vc_rows;
 		break;
@@ -29,7 +29,9 @@ struct fbcon_display {
     /* Filled in by the low-level console driver */
     const u_char *fontdata;
     int userfont;                   /* != 0 if fontdata kmalloc()ed */
-    u_short scrollmode;             /* Scroll Method */
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+    u_short scrollmode;             /* Scroll Method, use fb_scrollmode() */
+#endif
     u_short inverse;                /* != 0 text black on white as default */
     short yscroll;                  /* Hardware scrolling */
    int vrows;                      /* number of virtual rows */
@@ -208,6 +210,17 @@ static inline int attr_col_ec(int shift, struct vc_data *vc,
 #define SCROLL_REDRAW	   0x004
 #define SCROLL_PAN_REDRAW  0x005
 
+static inline u_short fb_scrollmode(struct fbcon_display *fb)
+{
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	return fb->scrollmode;
+#else
+	/* hardcoded to SCROLL_REDRAW if acceleration was disabled. */
+	return SCROLL_REDRAW;
+#endif
+}
+
+
 #ifdef CONFIG_FB_TILEBLITTING
 extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
 #endif
@@ -65,7 +65,7 @@ static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_copyarea area;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
 
 	area.sx = sy * vc->vc_font.height;
 	area.sy = vyres - ((sx + width) * vc->vc_font.width);
@@ -83,7 +83,7 @@ static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy,
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
 
 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dx = sy * vc->vc_font.height;
@@ -140,7 +140,7 @@ static void ccw_putcs(struct vc_data *vc, struct fb_info *info,
 	u32 cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -229,7 +229,7 @@ static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -387,7 +387,7 @@ static int ccw_update_start(struct fb_info *info)
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	u32 yoffset;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
 	int err;
 
 	yoffset = (vyres - info->var.yres) - ops->var.xoffset;
@@ -50,7 +50,7 @@ static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_copyarea area;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	area.sx = vxres - ((sy + height) * vc->vc_font.height);
 	area.sy = sx * vc->vc_font.width;
@@ -68,7 +68,7 @@ static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy,
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dx = vxres - ((sy + height) * vc->vc_font.height);
@@ -125,7 +125,7 @@ static void cw_putcs(struct vc_data *vc, struct fb_info *info,
 	u32 cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -212,7 +212,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -369,7 +369,7 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
 static int cw_update_start(struct fb_info *info)
 {
 	struct fbcon_ops *ops = info->fbcon_par;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = GETVXRES(ops->p, info);
 	u32 xoffset;
 	int err;
 
@@ -12,11 +12,11 @@
 #define _FBCON_ROTATE_H
 
 #define GETVYRES(s,i) ({ \
-	(s == SCROLL_REDRAW || s == SCROLL_MOVE) ? \
+	(fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE) ? \
 	(i)->var.yres : (i)->var.yres_virtual; })
 
 #define GETVXRES(s,i) ({ \
-	(s == SCROLL_REDRAW || s == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
+	(fb_scrollmode(s) == SCROLL_REDRAW || fb_scrollmode(s) == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
 	(i)->var.xres : (i)->var.xres_virtual; })
 
 
@@ -50,8 +50,8 @@ static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy,
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_copyarea area;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	area.sy = vyres - ((sy + height) * vc->vc_font.height);
 	area.sx = vxres - ((sx + width) * vc->vc_font.width);
@@ -69,8 +69,8 @@ static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy,
 	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dy = vyres - ((sy + height) * vc->vc_font.height);
@@ -162,8 +162,8 @@ static void ud_putcs(struct vc_data *vc, struct fb_info *info,
 	u32 mod = vc->vc_font.width % 8, cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -259,8 +259,8 @@ static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
+	u32 vxres = GETVXRES(ops->p, info);
 
 	if (!ops->fontbuffer)
 		return;
@@ -410,8 +410,8 @@ static int ud_update_start(struct fb_info *info)
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	int xoffset, yoffset;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = GETVYRES(ops->p, info);
+	u32 vxres = GETVXRES(ops->p, info);
 	int err;
 
 	xoffset = vxres - info->var.xres - ops->var.xoffset;
@@ -96,12 +96,8 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
 		 dentry, dentry, from_kuid(&init_user_ns, uid),
 		 any);
 	ret = NULL;
-
-	if (d_inode(dentry))
-		ret = v9fs_fid_find_inode(d_inode(dentry), uid);
-
 	/* we'll recheck under lock if there's anything to look in */
-	if (!ret && dentry->d_fsdata) {
+	if (dentry->d_fsdata) {
 		struct hlist_head *h = (struct hlist_head *)&dentry->d_fsdata;
 		spin_lock(&dentry->d_lock);
 		hlist_for_each_entry(fid, h, dlist) {
@@ -112,6 +108,9 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
 			}
 		}
 		spin_unlock(&dentry->d_lock);
+	} else {
+		if (dentry->d_inode)
+			ret = v9fs_fid_find_inode(dentry->d_inode, uid);
 	}
 
 	return ret;
@@ -2511,6 +2511,19 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
 	int ret;
 	bool dirty_bg_running;
 
+	/*
+	 * This can only happen when we are doing read-only scrub on read-only
+	 * mount.
+	 * In that case we should not start a new transaction on read-only fs.
+	 * Thus here we skip all chunk allocations.
+	 */
+	if (sb_rdonly(fs_info->sb)) {
+		mutex_lock(&fs_info->ro_block_group_mutex);
+		ret = inc_block_group_ro(cache, 0);
+		mutex_unlock(&fs_info->ro_block_group_mutex);
+		return ret;
+	}
+
 	do {
 		trans = btrfs_join_transaction(fs_info->extent_root);
 		if (IS_ERR(trans))
@@ -775,10 +775,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
 		goto fail;
 	}
 
-	spin_lock(&fs_info->trans_lock);
-	list_add(&pending_snapshot->list,
-		 &trans->transaction->pending_snapshots);
-	spin_unlock(&fs_info->trans_lock);
+	trans->pending_snapshot = pending_snapshot;
 
 	ret = btrfs_commit_transaction(trans);
 	if (ret)
@@ -1185,9 +1185,24 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
 	struct btrfs_trans_handle *trans = NULL;
 	int ret = 0;
 
+	/*
+	 * We need to have subvol_sem write locked, to prevent races between
+	 * concurrent tasks trying to disable quotas, because we will unlock
+	 * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.
+	 */
+	lockdep_assert_held_write(&fs_info->subvol_sem);
+
 	mutex_lock(&fs_info->qgroup_ioctl_lock);
 	if (!fs_info->quota_root)
 		goto out;
+
+	/*
+	 * Request qgroup rescan worker to complete and wait for it. This wait
+	 * must be done before transaction start for quota disable since it may
+	 * deadlock with transaction by the qgroup rescan worker.
+	 */
+	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+	btrfs_qgroup_wait_for_completion(fs_info, false);
 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
 
 	/*
@@ -1205,14 +1220,13 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
 		trans = NULL;
+		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
 		goto out;
 	}
 
 	if (!fs_info->quota_root)
 		goto out;
 
-	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
-	btrfs_qgroup_wait_for_completion(fs_info, false);
 	spin_lock(&fs_info->qgroup_lock);
 	quota_root = fs_info->quota_root;
 	fs_info->quota_root = NULL;
@@ -3379,6 +3393,9 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
 		btrfs_warn(fs_info,
 			"qgroup rescan init failed, qgroup is not enabled");
 		ret = -EINVAL;
+	} else if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
+		/* Quota disable is in progress */
+		ret = -EBUSY;
 	}
 
 	if (ret) {
@@ -2033,6 +2033,27 @@ static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info)
 		btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
 }
 
+/*
+ * Add a pending snapshot associated with the given transaction handle to the
+ * respective handle. This must be called after the transaction commit started
+ * and while holding fs_info->trans_lock.
+ * This serves to guarantee a caller of btrfs_commit_transaction() that it can
+ * safely free the pending snapshot pointer in case btrfs_commit_transaction()
+ * returns an error.
+ */
+static void add_pending_snapshot(struct btrfs_trans_handle *trans)
+{
+	struct btrfs_transaction *cur_trans = trans->transaction;
+
+	if (!trans->pending_snapshot)
+		return;
+
+	lockdep_assert_held(&trans->fs_info->trans_lock);
+	ASSERT(cur_trans->state >= TRANS_STATE_COMMIT_START);
+
+	list_add(&trans->pending_snapshot->list, &cur_trans->pending_snapshots);
+}
+
 int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
 {
 	struct btrfs_fs_info *fs_info = trans->fs_info;
@@ -2106,6 +2127,8 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
 	if (cur_trans->state >= TRANS_STATE_COMMIT_START) {
 		enum btrfs_trans_state want_state = TRANS_STATE_COMPLETED;
 
+		add_pending_snapshot(trans);
+
 		spin_unlock(&fs_info->trans_lock);
 		refcount_inc(&cur_trans->use_count);
 
@@ -2196,6 +2219,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
 	 * COMMIT_DOING so make sure to wait for num_writers to == 1 again.
 	 */
 	spin_lock(&fs_info->trans_lock);
+	add_pending_snapshot(trans);
 	cur_trans->state = TRANS_STATE_COMMIT_DOING;
 	spin_unlock(&fs_info->trans_lock);
 	wait_event(cur_trans->writer_wait,
@@ -123,6 +123,8 @@ struct btrfs_trans_handle {
 	struct btrfs_transaction *transaction;
 	struct btrfs_block_rsv *block_rsv;
 	struct btrfs_block_rsv *orig_rsv;
+	/* Set by a task that wants to create a snapshot. */
+	struct btrfs_pending_snapshot *pending_snapshot;
 	refcount_t use_count;
 	unsigned int type;
 	/*
@@ -2935,6 +2935,9 @@ void ext4_fc_replay_cleanup(struct super_block *sb);
 int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
 int __init ext4_fc_init_dentry_cache(void);
 void ext4_fc_destroy_dentry_cache(void);
+int ext4_fc_record_regions(struct super_block *sb, int ino,
+			   ext4_lblk_t lblk, ext4_fsblk_t pblk,
+			   int len, int replay);
 
 /* mballoc.c */
 extern const struct seq_operations ext4_mb_seq_groups_ops;
@@ -6096,11 +6096,15 @@ int ext4_ext_clear_bb(struct inode *inode)
 
 					ext4_mb_mark_bb(inode->i_sb,
 							path[j].p_block, 1, 0);
+					ext4_fc_record_regions(inode->i_sb, inode->i_ino,
+							0, path[j].p_block, 1, 1);
 				}
 				ext4_ext_drop_refs(path);
 				kfree(path);
 			}
 			ext4_mb_mark_bb(inode->i_sb, map.m_pblk, map.m_len, 0);
+			ext4_fc_record_regions(inode->i_sb, inode->i_ino,
+					map.m_lblk, map.m_pblk, map.m_len, 1);
 		}
 		cur = cur + map.m_len;
 	}
|||||||
@@ -1433,14 +1433,15 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
 		if (state->fc_modified_inodes[i] == ino)
 			return 0;
 	if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
-		state->fc_modified_inodes_size +=
-			EXT4_FC_REPLAY_REALLOC_INCREMENT;
 		state->fc_modified_inodes = krealloc(
-					state->fc_modified_inodes, sizeof(int) *
-					state->fc_modified_inodes_size,
-					GFP_KERNEL);
+				state->fc_modified_inodes,
+				sizeof(int) * (state->fc_modified_inodes_size +
+				EXT4_FC_REPLAY_REALLOC_INCREMENT),
+				GFP_KERNEL);
 		if (!state->fc_modified_inodes)
 			return -ENOMEM;
+		state->fc_modified_inodes_size +=
+			EXT4_FC_REPLAY_REALLOC_INCREMENT;
 	}
 	state->fc_modified_inodes[state->fc_modified_inodes_used++] = ino;
 	return 0;
@@ -1472,7 +1473,9 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
 	}
 	inode = NULL;
 
-	ext4_fc_record_modified_inode(sb, ino);
+	ret = ext4_fc_record_modified_inode(sb, ino);
+	if (ret)
+		goto out;
 
 	raw_fc_inode = (struct ext4_inode *)
 		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
@@ -1603,16 +1606,23 @@ out:
 }
 
 /*
- * Record physical disk regions which are in use as per fast commit area. Our
- * simple replay phase allocator excludes these regions from allocation.
+ * Record physical disk regions which are in use as per fast commit area,
+ * and used by inodes during replay phase. Our simple replay phase
+ * allocator excludes these regions from allocation.
  */
-static int ext4_fc_record_regions(struct super_block *sb, int ino,
-		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len)
+int ext4_fc_record_regions(struct super_block *sb, int ino,
+		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len, int replay)
 {
 	struct ext4_fc_replay_state *state;
 	struct ext4_fc_alloc_region *region;
 
 	state = &EXT4_SB(sb)->s_fc_replay_state;
+	/*
+	 * during replay phase, the fc_regions_valid may not same as
+	 * fc_regions_used, update it when do new additions.
+	 */
+	if (replay && state->fc_regions_used != state->fc_regions_valid)
+		state->fc_regions_used = state->fc_regions_valid;
 	if (state->fc_regions_used == state->fc_regions_size) {
 		state->fc_regions_size +=
 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
@@ -1630,6 +1640,9 @@ static int ext4_fc_record_regions(struct super_block *sb, int ino,
 	region->pblk = pblk;
 	region->len = len;
 
+	if (replay)
+		state->fc_regions_valid++;
+
 	return 0;
 }
 
@@ -1661,6 +1674,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
 	}
 
 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
+	if (ret)
+		goto out;
 
 	start = le32_to_cpu(ex->ee_block);
 	start_pblk = ext4_ext_pblock(ex);
@@ -1678,18 +1693,14 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
 		map.m_pblk = 0;
 		ret = ext4_map_blocks(NULL, inode, &map, 0);
 
-		if (ret < 0) {
-			iput(inode);
-			return 0;
-		}
+		if (ret < 0)
+			goto out;
 
 		if (ret == 0) {
 			/* Range is not mapped */
 			path = ext4_find_extent(inode, cur, NULL, 0);
-			if (IS_ERR(path)) {
-				iput(inode);
-				return 0;
-			}
+			if (IS_ERR(path))
+				goto out;
 			memset(&newex, 0, sizeof(newex));
 			newex.ee_block = cpu_to_le32(cur);
 			ext4_ext_store_pblock(
@@ -1703,10 +1714,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
 			up_write((&EXT4_I(inode)->i_data_sem));
 			ext4_ext_drop_refs(path);
 			kfree(path);
-			if (ret) {
-				iput(inode);
-				return 0;
-			}
+			if (ret)
+				goto out;
 			goto next;
 		}
 
@@ -1719,10 +1728,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
 					ext4_ext_is_unwritten(ex),
 					start_pblk + cur - start);
-		if (ret) {
-			iput(inode);
-			return 0;
-		}
+		if (ret)
+			goto out;
 		/*
 		 * Mark the old blocks as free since they aren't used
 		 * anymore. We maintain an array of all the modified
@@ -1742,10 +1749,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
 				ext4_ext_is_unwritten(ex), map.m_pblk);
 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
 					ext4_ext_is_unwritten(ex), map.m_pblk);
-		if (ret) {
-			iput(inode);
-			return 0;
-		}
+		if (ret)
+			goto out;
 		/*
 		 * We may have split the extent tree while toggling the state.
 		 * Try to shrink the extent tree now.
@@ -1757,6 +1762,7 @@ next:
 	}
 	ext4_ext_replay_shrink_inode(inode, i_size_read(inode) >>
 					sb->s_blocksize_bits);
+out:
 	iput(inode);
 	return 0;
 }
@@ -1786,6 +1792,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
 	}
 
 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
+	if (ret)
+		goto out;
 
 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
 			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
@@ -1795,10 +1803,8 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
 		map.m_len = remaining;
 
 		ret = ext4_map_blocks(NULL, inode, &map, 0);
-		if (ret < 0) {
-			iput(inode);
-			return 0;
-		}
+		if (ret < 0)
+			goto out;
 		if (ret > 0) {
 			remaining -= ret;
 			cur += ret;
@@ -1810,18 +1816,17 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
 	}
 
 	down_write(&EXT4_I(inode)->i_data_sem);
-	ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
-			lrange.fc_lblk + lrange.fc_len - 1);
+	ret = ext4_ext_remove_space(inode, le32_to_cpu(lrange.fc_lblk),
+			le32_to_cpu(lrange.fc_lblk) +
+			le32_to_cpu(lrange.fc_len) - 1);
 	up_write(&EXT4_I(inode)->i_data_sem);
-	if (ret) {
-		iput(inode);
-		return 0;
-	}
+	if (ret)
+		goto out;
 	ext4_ext_replay_shrink_inode(inode,
 		i_size_read(inode) >> sb->s_blocksize_bits);
 	ext4_mark_inode_dirty(NULL, inode);
+out:
 	iput(inode);
 
 	return 0;
 }
@@ -1973,7 +1978,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
 			ret = ext4_fc_record_regions(sb,
 				le32_to_cpu(ext.fc_ino),
 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
-				ext4_ext_get_actual_len(ex));
+				ext4_ext_get_actual_len(ex), 0);
 			if (ret < 0)
 				break;
 			ret = JBD2_FC_REPLAY_CONTINUE;
@@ -1147,7 +1147,15 @@ static void ext4_restore_inline_data(handle_t *handle, struct inode *inode,
 					   struct ext4_iloc *iloc,
 					   void *buf, int inline_size)
 {
-	ext4_create_inline_data(handle, inode, inline_size);
+	int ret;
+
+	ret = ext4_create_inline_data(handle, inode, inline_size);
+	if (ret) {
+		ext4_msg(inode->i_sb, KERN_EMERG,
+			"error restoring inline_data for inode -- potential data loss! (inode %lu, error %d)",
+			inode->i_ino, ret);
+		return;
+	}
 	ext4_write_inline_data(inode, iloc, buf, 0, inline_size);
 	ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
 }
@@ -5753,7 +5753,8 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
 	struct super_block *sb = ar->inode->i_sb;
 	ext4_group_t group;
 	ext4_grpblk_t blkoff;
-	int i = sb->s_blocksize;
+	ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);
+	ext4_grpblk_t i = 0;
 	ext4_fsblk_t goal, block;
 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
 
@@ -5775,19 +5776,26 @@ static ext4_fsblk_t ext4_mb_new_blocks_simple(handle_t *handle,
 		ext4_get_group_no_and_offset(sb,
 			max(ext4_group_first_block_no(sb, group), goal),
 			NULL, &blkoff);
-		i = mb_find_next_zero_bit(bitmap_bh->b_data, sb->s_blocksize,
-					blkoff);
+		while (1) {
+			i = mb_find_next_zero_bit(bitmap_bh->b_data, max,
+						blkoff);
+			if (i >= max)
+				break;
+			if (ext4_fc_replay_check_excluded(sb,
+				ext4_group_first_block_no(sb, group) + i)) {
+				blkoff = i + 1;
+			} else
+				break;
+		}
 		brelse(bitmap_bh);
-		if (i >= sb->s_blocksize)
-			continue;
-		if (ext4_fc_replay_check_excluded(sb,
-			ext4_group_first_block_no(sb, group) + i))
-			continue;
-		break;
+		if (i < max)
+			break;
 	}
 
-	if (group >= ext4_get_groups_count(sb) && i >= sb->s_blocksize)
+	if (group >= ext4_get_groups_count(sb) || i >= max) {
+		*errp = -ENOSPC;
 		return 0;
+	}
 
 	block = ext4_group_first_block_no(sb, group) + i;
 	ext4_mb_mark_bb(sb, block, 1, 1);
@@ -4112,8 +4112,10 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
 			status = nfserr_clid_inuse;
 			if (client_has_state(old)
 					&& !same_creds(&unconf->cl_cred,
-						       &old->cl_cred))
+						       &old->cl_cred)) {
+				old = NULL;
 				goto out;
+			}
 	status = mark_client_expired_locked(old);
 	if (status) {
 		old = NULL;
@@ -15,6 +15,8 @@
 #include <linux/minmax.h>
 #include <linux/mm.h>
 #include <linux/mmu_notifier.h>
+#include <linux/ftrace.h>
+#include <linux/instrumentation.h>
 #include <linux/preempt.h>
 #include <linux/msi.h>
 #include <linux/slab.h>
@@ -363,8 +365,11 @@ struct kvm_vcpu {
 	int last_used_slot;
 };
 
-/* must be called with irqs disabled */
-static __always_inline void guest_enter_irqoff(void)
+/*
+ * Start accounting time towards a guest.
+ * Must be called before entering guest context.
+ */
+static __always_inline void guest_timing_enter_irqoff(void)
 {
 	/*
 	 * This is running in ioctl context so its safe to assume that it's the
@@ -373,7 +378,18 @@ static __always_inline void guest_enter_irqoff(void)
 	instrumentation_begin();
 	vtime_account_guest_enter();
 	instrumentation_end();
+}
+
+/*
+ * Enter guest context and enter an RCU extended quiescent state.
+ *
+ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is
+ * unsafe to use any code which may directly or indirectly use RCU, tracing
+ * (including IRQ flag tracing), or lockdep. All code in this period must be
+ * non-instrumentable.
+ */
+static __always_inline void guest_context_enter_irqoff(void)
+{
 	/*
 	 * KVM does not hold any references to rcu protected data when it
 	 * switches CPU into a guest mode. In fact switching to a guest mode
@@ -389,16 +405,79 @@ static __always_inline void guest_enter_irqoff(void)
 	}
 }
 
-static __always_inline void guest_exit_irqoff(void)
+/*
+ * Deprecated. Architectures should move to guest_timing_enter_irqoff() and
+ * guest_state_enter_irqoff().
+ */
+static __always_inline void guest_enter_irqoff(void)
+{
+	guest_timing_enter_irqoff();
+	guest_context_enter_irqoff();
+}
+
+/**
+ * guest_state_enter_irqoff - Fixup state when entering a guest
+ *
+ * Entry to a guest will enable interrupts, but the kernel state is interrupts
+ * disabled when this is invoked. Also tell RCU about it.
+ *
+ * 1) Trace interrupts on state
+ * 2) Invoke context tracking if enabled to adjust RCU state
+ * 3) Tell lockdep that interrupts are enabled
+ *
+ * Invoked from architecture specific code before entering a guest.
+ * Must be called with interrupts disabled and the caller must be
+ * non-instrumentable.
+ * The caller has to invoke guest_timing_enter_irqoff() before this.
+ *
+ * Note: this is analogous to exit_to_user_mode().
+ */
+static __always_inline void guest_state_enter_irqoff(void)
+{
+	instrumentation_begin();
+	trace_hardirqs_on_prepare();
+	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+	instrumentation_end();
+
+	guest_context_enter_irqoff();
+	lockdep_hardirqs_on(CALLER_ADDR0);
+}
+
+/*
+ * Exit guest context and exit an RCU extended quiescent state.
+ *
+ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is
+ * unsafe to use any code which may directly or indirectly use RCU, tracing
+ * (including IRQ flag tracing), or lockdep. All code in this period must be
+ * non-instrumentable.
+ */
+static __always_inline void guest_context_exit_irqoff(void)
 {
 	context_tracking_guest_exit();
+}
 
+/*
+ * Stop accounting time towards a guest.
+ * Must be called after exiting guest context.
+ */
+static __always_inline void guest_timing_exit_irqoff(void)
+{
 	instrumentation_begin();
 	/* Flush the guest cputime we spent on the guest */
 	vtime_account_guest_exit();
 	instrumentation_end();
 }
 
+/*
+ * Deprecated. Architectures should move to guest_state_exit_irqoff() and
+ * guest_timing_exit_irqoff().
+ */
+static __always_inline void guest_exit_irqoff(void)
+{
+	guest_context_exit_irqoff();
+	guest_timing_exit_irqoff();
+}
+
 static inline void guest_exit(void)
 {
 	unsigned long flags;
@@ -408,6 +487,33 @@ static inline void guest_exit(void)
 	local_irq_restore(flags);
 }
 
+/**
+ * guest_state_exit_irqoff - Establish state when returning from guest mode
+ *
+ * Entry from a guest disables interrupts, but guest mode is traced as
+ * interrupts enabled. Also with NO_HZ_FULL RCU might be idle.
+ *
+ * 1) Tell lockdep that interrupts are disabled
+ * 2) Invoke context tracking if enabled to reactivate RCU
+ * 3) Trace interrupts off state
+ *
+ * Invoked from architecture specific code after exiting a guest.
+ * Must be invoked with interrupts disabled and the caller must be
+ * non-instrumentable.
+ * The caller has to invoke guest_timing_exit_irqoff() after this.
+ *
+ * Note: this is analogous to enter_from_user_mode().
+ */
+static __always_inline void guest_state_exit_irqoff(void)
+{
+	lockdep_hardirqs_off(CALLER_ADDR0);
+	guest_context_exit_irqoff();
+
+	instrumentation_begin();
+	trace_hardirqs_off_finish();
+	instrumentation_end();
+}
+
 static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -62,6 +62,7 @@ static inline unsigned long pte_index(unsigned long address)
 {
 	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
 }
+#define pte_index pte_index
 
 #ifndef pmd_index
 static inline unsigned long pmd_index(unsigned long address)
@@ -56,8 +56,10 @@
  *                                                                          *
  ****************************************************************************/
 
+#define AES_IEC958_STATUS_SIZE	24
+
 struct snd_aes_iec958 {
-	unsigned char status[24];	/* AES/IEC958 channel status bits */
+	unsigned char status[AES_IEC958_STATUS_SIZE]; /* AES/IEC958 channel status bits */
 	unsigned char subcode[147];	/* AES/IEC958 subcode bits */
 	unsigned char pad;		/* nothing */
 	unsigned char dig_subframe[4];	/* AES/IEC958 subframe bits */
@@ -1964,6 +1964,7 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	 */
 	un = lookup_undo(ulp, semid);
 	if (un) {
+		spin_unlock(&ulp->lock);
 		kvfree(new);
 		goto success;
 	}
@@ -1976,9 +1977,8 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	ipc_assert_locked_object(&sma->sem_perm);
 	list_add(&new->list_id, &sma->list_id);
 	un = new;
-
-success:
 	spin_unlock(&ulp->lock);
+success:
 	sem_unlock(sma, -1);
 out:
 	return un;
@@ -541,20 +541,22 @@ static void kauditd_printk_skb(struct sk_buff *skb)
 /**
  * kauditd_rehold_skb - Handle a audit record send failure in the hold queue
  * @skb: audit record
+ * @error: error code (unused)
  *
  * Description:
  * This should only be used by the kauditd_thread when it fails to flush the
  * hold queue.
  */
-static void kauditd_rehold_skb(struct sk_buff *skb)
+static void kauditd_rehold_skb(struct sk_buff *skb, __always_unused int error)
 {
-	/* put the record back in the queue at the same place */
-	skb_queue_head(&audit_hold_queue, skb);
+	/* put the record back in the queue */
+	skb_queue_tail(&audit_hold_queue, skb);
 }
 
 /**
  * kauditd_hold_skb - Queue an audit record, waiting for auditd
  * @skb: audit record
+ * @error: error code
  *
  * Description:
  * Queue the audit record, waiting for an instance of auditd. When this
@@ -564,19 +566,31 @@ static void kauditd_rehold_skb(struct sk_buff *skb)
  * and queue it, if we have room. If we want to hold on to the record, but we
  * don't have room, record a record lost message.
  */
-static void kauditd_hold_skb(struct sk_buff *skb)
+static void kauditd_hold_skb(struct sk_buff *skb, int error)
 {
 	/* at this point it is uncertain if we will ever send this to auditd so
 	 * try to send the message via printk before we go any further */
 	kauditd_printk_skb(skb);
 
 	/* can we just silently drop the message? */
-	if (!audit_default) {
-		kfree_skb(skb);
-		return;
+	if (!audit_default)
+		goto drop;
+
+	/* the hold queue is only for when the daemon goes away completely,
+	 * not -EAGAIN failures; if we are in a -EAGAIN state requeue the
+	 * record on the retry queue unless it's full, in which case drop it
+	 */
+	if (error == -EAGAIN) {
+		if (!audit_backlog_limit ||
+		    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
+			skb_queue_tail(&audit_retry_queue, skb);
+			return;
+		}
+		audit_log_lost("kauditd retry queue overflow");
+		goto drop;
 	}
 
-	/* if we have room, queue the message */
+	/* if we have room in the hold queue, queue the message */
 	if (!audit_backlog_limit ||
 	    skb_queue_len(&audit_hold_queue) < audit_backlog_limit) {
 		skb_queue_tail(&audit_hold_queue, skb);
@@ -585,24 +599,32 @@ static void kauditd_hold_skb(struct sk_buff *skb)
 
 	/* we have no other options - drop the message */
 	audit_log_lost("kauditd hold queue overflow");
+drop:
 	kfree_skb(skb);
 }
 
 /**
  * kauditd_retry_skb - Queue an audit record, attempt to send again to auditd
  * @skb: audit record
+ * @error: error code (unused)
  *
  * Description:
  * Not as serious as kauditd_hold_skb() as we still have a connected auditd,
  * but for some reason we are having problems sending it audit records so
  * queue the given record and attempt to resend.
 */
-static void kauditd_retry_skb(struct sk_buff *skb)
+static void kauditd_retry_skb(struct sk_buff *skb, __always_unused int error)
 {
-	/* NOTE: because records should only live in the retry queue for a
-	 * short period of time, before either being sent or moved to the hold
-	 * queue, we don't currently enforce a limit on this queue */
-	skb_queue_tail(&audit_retry_queue, skb);
+	if (!audit_backlog_limit ||
+	    skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {
+		skb_queue_tail(&audit_retry_queue, skb);
+		return;
+	}
+
+	/* we have to drop the record, send it via printk as a last effort */
+	kauditd_printk_skb(skb);
+	audit_log_lost("kauditd retry queue overflow");
+	kfree_skb(skb);
 }
 
 /**
@@ -640,7 +662,7 @@ static void auditd_reset(const struct auditd_connection *ac)
 	/* flush the retry queue to the hold queue, but don't touch the main
 	 * queue since we need to process that normally for multicast */
 	while ((skb = skb_dequeue(&audit_retry_queue)))
-		kauditd_hold_skb(skb);
+		kauditd_hold_skb(skb, -ECONNREFUSED);
 }
 
 /**
@@ -714,16 +736,18 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
 			      struct sk_buff_head *queue,
 			      unsigned int retry_limit,
 			      void (*skb_hook)(struct sk_buff *skb),
-			      void (*err_hook)(struct sk_buff *skb))
+			      void (*err_hook)(struct sk_buff *skb, int error))
 {
 	int rc = 0;
-	struct sk_buff *skb;
+	struct sk_buff *skb = NULL;
+	struct sk_buff *skb_tail;
 	unsigned int failed = 0;
 
 	/* NOTE: kauditd_thread takes care of all our locking, we just use
 	 *       the netlink info passed to us (e.g. sk and portid) */
 
-	while ((skb = skb_dequeue(queue))) {
+	skb_tail = skb_peek_tail(queue);
+	while ((skb != skb_tail) && (skb = skb_dequeue(queue))) {
 		/* call the skb_hook for each skb we touch */
 		if (skb_hook)
 			(*skb_hook)(skb);
@@ -731,7 +755,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
 		/* can we send to anyone via unicast? */
 		if (!sk) {
 			if (err_hook)
-				(*err_hook)(skb);
+				(*err_hook)(skb, -ECONNREFUSED);
 			continue;
 		}
 
@@ -745,7 +769,7 @@ retry:
 			    rc == -ECONNREFUSED || rc == -EPERM) {
 				sk = NULL;
 				if (err_hook)
-					(*err_hook)(skb);
+					(*err_hook)(skb, rc);
 				if (rc == -EAGAIN)
 					rc = 0;
 				/* continue to drain the queue */
@@ -104,7 +104,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
 	}
 
 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
-		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
+		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
 	if (rb) {
 		kmemleak_not_leak(pages);
 		rb->pages = pages;
@@ -1523,10 +1523,15 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
 	struct cpuset *sibling;
 	struct cgroup_subsys_state *pos_css;
 
+	percpu_rwsem_assert_held(&cpuset_rwsem);
+
 	/*
 	 * Check all its siblings and call update_cpumasks_hier()
 	 * if their use_parent_ecpus flag is set in order for them
 	 * to use the right effective_cpus value.
+	 *
+	 * The update_cpumasks_hier() function may sleep. So we have to
+	 * release the RCU read lock before calling it.
 	 */
 	rcu_read_lock();
 	cpuset_for_each_child(sibling, pos_css, parent) {
@@ -1534,8 +1539,13 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
 			continue;
 		if (!sibling->use_parent_ecpus)
 			continue;
+		if (!css_tryget_online(&sibling->css))
+			continue;
 
+		rcu_read_unlock();
 		update_cpumasks_hier(sibling, tmp);
+		rcu_read_lock();
+		css_put(&sibling->css);
 	}
 	rcu_read_unlock();
 }
@@ -3234,6 +3234,15 @@ static int perf_event_modify_breakpoint(struct perf_event *bp,
 	return err;
 }
 
+/*
+ * Copy event-type-independent attributes that may be modified.
+ */
+static void perf_event_modify_copy_attr(struct perf_event_attr *to,
+					const struct perf_event_attr *from)
+{
+	to->sig_data = from->sig_data;
+}
+
 static int perf_event_modify_attr(struct perf_event *event,
 				  struct perf_event_attr *attr)
 {
@@ -3256,10 +3265,17 @@ static int perf_event_modify_attr(struct perf_event *event,
 	WARN_ON_ONCE(event->ctx->parent_ctx);
 
 	mutex_lock(&event->child_mutex);
+	/*
+	 * Event-type-independent attributes must be copied before event-type
+	 * modification, which will validate that final attributes match the
+	 * source attributes after all relevant attributes have been copied.
+	 */
+	perf_event_modify_copy_attr(&event->attr, attr);
 	err = func(event, attr);
 	if (err)
 		goto out;
 	list_for_each_entry(child, &event->child_list, child_list) {
+		perf_event_modify_copy_attr(&child->attr, attr);
 		err = func(child, attr);
 		if (err)
 			goto out;
@@ -171,6 +171,8 @@ static void __init pte_advanced_tests(struct pgtable_debug_args *args)
 	ptep_test_and_clear_young(args->vma, args->vaddr, args->ptep);
 	pte = ptep_get(args->ptep);
 	WARN_ON(pte_young(pte));
+
+	ptep_get_and_clear_full(args->mm, args->vaddr, args->ptep, 1);
 }
 
 static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
@@ -1403,7 +1403,8 @@ static void kmemleak_scan(void)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
-	int i;
+	struct zone *zone;
+	int __maybe_unused i;
 	int new_leaks = 0;
 
 	jiffies_last_scan = jiffies;
@@ -1443,9 +1444,9 @@ static void kmemleak_scan(void)
 	 * Struct page scanning for each node.
 	 */
 	get_online_mems();
-	for_each_online_node(i) {
-		unsigned long start_pfn = node_start_pfn(i);
-		unsigned long end_pfn = node_end_pfn(i);
+	for_each_populated_zone(zone) {
+		unsigned long start_pfn = zone->zone_start_pfn;
+		unsigned long end_pfn = zone_end_pfn(zone);
 		unsigned long pfn;
 
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -1454,8 +1455,8 @@ static void kmemleak_scan(void)
 			if (!page)
 				continue;
 
-			/* only scan pages belonging to this node */
-			if (page_to_nid(page) != i)
+			/* only scan pages belonging to this zone */
+			if (page_zone(page) != zone)
 				continue;
 			/* only scan if page is in use */
 			if (page_count(page) == 0)
@@ -49,7 +49,7 @@ static void nft_reject_br_send_v4_tcp_reset(struct net *net,
 {
 	struct sk_buff *nskb;
 
-	nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, dev, hook);
+	nskb = nf_reject_skb_v4_tcp_reset(net, oldskb, NULL, hook);
 	if (!nskb)
 		return;
 
@@ -65,7 +65,7 @@ static void nft_reject_br_send_v4_unreach(struct net *net,
 {
 	struct sk_buff *nskb;
 
-	nskb = nf_reject_skb_v4_unreach(net, oldskb, dev, hook, code);
+	nskb = nf_reject_skb_v4_unreach(net, oldskb, NULL, hook, code);
 	if (!nskb)
 		return;
 
@@ -81,7 +81,7 @@ static void nft_reject_br_send_v6_tcp_reset(struct net *net,
 {
 	struct sk_buff *nskb;
 
-	nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, dev, hook);
+	nskb = nf_reject_skb_v6_tcp_reset(net, oldskb, NULL, hook);
 	if (!nskb)
 		return;
 
@@ -98,7 +98,7 @@ static void nft_reject_br_send_v6_unreach(struct net *net,
 {
 	struct sk_buff *nskb;
 
-	nskb = nf_reject_skb_v6_unreach(net, oldskb, dev, hook, code);
+	nskb = nf_reject_skb_v6_unreach(net, oldskb, NULL, hook, code);
 	if (!nskb)
 		return;
 
@@ -1441,7 +1441,7 @@ static int nl802154_send_key(struct sk_buff *msg, u32 cmd, u32 portid,
 
 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
 	if (!hdr)
-		return -1;
+		return -ENOBUFS;
 
 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
 		goto nla_put_failure;
@@ -1634,7 +1634,7 @@ static int nl802154_send_device(struct sk_buff *msg, u32 cmd, u32 portid,
 
 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
 	if (!hdr)
-		return -1;
+		return -ENOBUFS;
 
 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
 		goto nla_put_failure;
@@ -1812,7 +1812,7 @@ static int nl802154_send_devkey(struct sk_buff *msg, u32 cmd, u32 portid,
 
 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
 	if (!hdr)
-		return -1;
+		return -ENOBUFS;
 
 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
 		goto nla_put_failure;
@@ -1988,7 +1988,7 @@ static int nl802154_send_seclevel(struct sk_buff *msg, u32 cmd, u32 portid,
 
 	hdr = nl802154hdr_put(msg, portid, seq, flags, cmd);
 	if (!hdr)
-		return -1;
+		return -ENOBUFS;
 
 	if (nla_put_u32(msg, NL802154_ATTR_IFINDEX, dev->ifindex))
 		goto nla_put_failure;
@@ -459,6 +459,18 @@ static unsigned int fill_remote_addresses_vec(struct mptcp_sock *msk, bool fullm
 	return i;
 }
 
+static struct mptcp_pm_addr_entry *
+__lookup_addr(struct pm_nl_pernet *pernet, struct mptcp_addr_info *info)
+{
+	struct mptcp_pm_addr_entry *entry;
+
+	list_for_each_entry(entry, &pernet->local_addr_list, list) {
+		if (addresses_equal(&entry->addr, info, true))
+			return entry;
+	}
+	return NULL;
+}
+
 static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
 {
 	struct sock *sk = (struct sock *)msk;
@@ -1725,17 +1737,21 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
 	if (addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP)
 		bkup = 1;
 
-	list_for_each_entry(entry, &pernet->local_addr_list, list) {
-		if (addresses_equal(&entry->addr, &addr.addr, true)) {
-			mptcp_nl_addr_backup(net, &entry->addr, bkup);
-
-			if (bkup)
-				entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
-			else
-				entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
-		}
+	spin_lock_bh(&pernet->lock);
+	entry = __lookup_addr(pernet, &addr.addr);
+	if (!entry) {
+		spin_unlock_bh(&pernet->lock);
+		return -EINVAL;
 	}
 
+	if (bkup)
+		entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
+	else
+		entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
+	addr = *entry;
+	spin_unlock_bh(&pernet->lock);
+
+	mptcp_nl_addr_backup(net, &addr.addr, bkup);
 	return 0;
 }
net/smc/af_smc.c (133 changed lines)
@@ -548,17 +548,115 @@ static void smc_stat_fallback(struct smc_sock *smc)
 	mutex_unlock(&net->smc.mutex_fback_rsn);
 }
 
+/* must be called under rcu read lock */
+static void smc_fback_wakeup_waitqueue(struct smc_sock *smc, void *key)
+{
+	struct socket_wq *wq;
+	__poll_t flags;
+
+	wq = rcu_dereference(smc->sk.sk_wq);
+	if (!skwq_has_sleeper(wq))
+		return;
+
+	/* wake up smc sk->sk_wq */
+	if (!key) {
+		/* sk_state_change */
+		wake_up_interruptible_all(&wq->wait);
+	} else {
+		flags = key_to_poll(key);
+		if (flags & (EPOLLIN | EPOLLOUT))
+			/* sk_data_ready or sk_write_space */
+			wake_up_interruptible_sync_poll(&wq->wait, flags);
+		else if (flags & EPOLLERR)
+			/* sk_error_report */
+			wake_up_interruptible_poll(&wq->wait, flags);
+	}
+}
+
+static int smc_fback_mark_woken(wait_queue_entry_t *wait,
+				unsigned int mode, int sync, void *key)
+{
+	struct smc_mark_woken *mark =
+		container_of(wait, struct smc_mark_woken, wait_entry);
+
+	mark->woken = true;
+	mark->key = key;
+	return 0;
+}
+
+static void smc_fback_forward_wakeup(struct smc_sock *smc, struct sock *clcsk,
+				     void (*clcsock_callback)(struct sock *sk))
+{
+	struct smc_mark_woken mark = { .woken = false };
+	struct socket_wq *wq;
+
+	init_waitqueue_func_entry(&mark.wait_entry,
+				  smc_fback_mark_woken);
+	rcu_read_lock();
+	wq = rcu_dereference(clcsk->sk_wq);
+	if (!wq)
+		goto out;
+	add_wait_queue(sk_sleep(clcsk), &mark.wait_entry);
+	clcsock_callback(clcsk);
+	remove_wait_queue(sk_sleep(clcsk), &mark.wait_entry);
+
+	if (mark.woken)
+		smc_fback_wakeup_waitqueue(smc, mark.key);
+out:
+	rcu_read_unlock();
+}
+
+static void smc_fback_state_change(struct sock *clcsk)
+{
+	struct smc_sock *smc =
+		smc_clcsock_user_data(clcsk);
+
+	if (!smc)
+		return;
+	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_state_change);
+}
+
+static void smc_fback_data_ready(struct sock *clcsk)
+{
+	struct smc_sock *smc =
+		smc_clcsock_user_data(clcsk);
+
+	if (!smc)
+		return;
+	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_data_ready);
+}
+
+static void smc_fback_write_space(struct sock *clcsk)
+{
+	struct smc_sock *smc =
+		smc_clcsock_user_data(clcsk);
+
+	if (!smc)
+		return;
+	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_write_space);
+}
+
+static void smc_fback_error_report(struct sock *clcsk)
+{
+	struct smc_sock *smc =
+		smc_clcsock_user_data(clcsk);
+
+	if (!smc)
+		return;
+	smc_fback_forward_wakeup(smc, clcsk, smc->clcsk_error_report);
+}
+
 static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
 {
-	wait_queue_head_t *smc_wait = sk_sleep(&smc->sk);
-	wait_queue_head_t *clc_wait;
-	unsigned long flags;
+	struct sock *clcsk;
 
 	mutex_lock(&smc->clcsock_release_lock);
 	if (!smc->clcsock) {
 		mutex_unlock(&smc->clcsock_release_lock);
 		return -EBADF;
 	}
+	clcsk = smc->clcsock->sk;
+
 	smc->use_fallback = true;
 	smc->fallback_rsn = reason_code;
 	smc_stat_fallback(smc);
@@ -568,16 +666,22 @@ static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
 		smc->clcsock->wq.fasync_list =
 			smc->sk.sk_socket->wq.fasync_list;
 
-		/* There may be some entries remaining in
-		 * smc socket->wq, which should be removed
-		 * to clcsocket->wq during the fallback.
+		/* There might be some wait entries remaining
+		 * in smc sk->sk_wq and they should be woken up
+		 * as clcsock's wait queue is woken up.
 		 */
-		clc_wait = sk_sleep(smc->clcsock->sk);
-		spin_lock_irqsave(&smc_wait->lock, flags);
-		spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING);
-		list_splice_init(&smc_wait->head, &clc_wait->head);
-		spin_unlock(&clc_wait->lock);
-		spin_unlock_irqrestore(&smc_wait->lock, flags);
+		smc->clcsk_state_change = clcsk->sk_state_change;
+		smc->clcsk_data_ready = clcsk->sk_data_ready;
+		smc->clcsk_write_space = clcsk->sk_write_space;
+		smc->clcsk_error_report = clcsk->sk_error_report;
+
+		clcsk->sk_state_change = smc_fback_state_change;
+		clcsk->sk_data_ready = smc_fback_data_ready;
+		clcsk->sk_write_space = smc_fback_write_space;
+		clcsk->sk_error_report = smc_fback_error_report;
+
+		smc->clcsock->sk->sk_user_data =
+			(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
 	}
 	mutex_unlock(&smc->clcsock_release_lock);
 	return 0;
@@ -1909,10 +2013,9 @@ out:
 
 static void smc_clcsock_data_ready(struct sock *listen_clcsock)
 {
-	struct smc_sock *lsmc;
+	struct smc_sock *lsmc =
+		smc_clcsock_user_data(listen_clcsock);
 
-	lsmc = (struct smc_sock *)
-	       ((uintptr_t)listen_clcsock->sk_user_data & ~SK_USER_DATA_NOCOPY);
 	if (!lsmc)
 		return;
 	lsmc->clcsk_data_ready(listen_clcsock);
@@ -129,6 +129,12 @@ enum smc_urg_state {
 	SMC_URG_READ	= 3,			/* data was already read */
 };
 
+struct smc_mark_woken {
+	bool woken;
+	void *key;
+	wait_queue_entry_t wait_entry;
+};
+
 struct smc_connection {
 	struct rb_node		alert_node;
 	struct smc_link_group	*lgr;		/* link group of connection */
@@ -217,8 +223,14 @@ struct smc_connection {
 struct smc_sock {				/* smc sock container */
 	struct sock		sk;
 	struct socket		*clcsock;	/* internal tcp socket */
+	void			(*clcsk_state_change)(struct sock *sk);
+						/* original stat_change fct. */
 	void			(*clcsk_data_ready)(struct sock *sk);
-						/* original data_ready fct. **/
+						/* original data_ready fct. */
+	void			(*clcsk_write_space)(struct sock *sk);
+						/* original write_space fct. */
+	void			(*clcsk_error_report)(struct sock *sk);
+						/* original error_report fct. */
 	struct smc_connection	conn;		/* smc connection */
 	struct smc_sock		*listen_smc;	/* listen parent */
 	struct work_struct	connect_work;	/* handle non-blocking connect*/
@@ -253,6 +265,12 @@ static inline struct smc_sock *smc_sk(const struct sock *sk)
 	return (struct smc_sock *)sk;
 }
 
+static inline struct smc_sock *smc_clcsock_user_data(struct sock *clcsk)
+{
+	return (struct smc_sock *)
+	       ((uintptr_t)clcsk->sk_user_data & ~SK_USER_DATA_NOCOPY);
+}
+
 extern struct workqueue_struct	*smc_hs_wq;	/* wq for handshake work */
 extern struct workqueue_struct	*smc_close_wq;	/* wq for close work */
 
@@ -152,6 +152,8 @@ static void cond_list_destroy(struct policydb *p)
 	for (i = 0; i < p->cond_list_len; i++)
 		cond_node_destroy(&p->cond_list[i]);
 	kfree(p->cond_list);
+	p->cond_list = NULL;
+	p->cond_list_len = 0;
 }
 
 void cond_policydb_destroy(struct policydb *p)
@@ -441,7 +443,6 @@ int cond_read_list(struct policydb *p, void *fp)
 	return 0;
 err:
 	cond_list_destroy(p);
-	p->cond_list = NULL;
 	return rc;
 }
 
@@ -985,7 +985,7 @@ void snd_hda_pick_fixup(struct hda_codec *codec,
 	int id = HDA_FIXUP_ID_NOT_SET;
 	const char *name = NULL;
 	const char *type = NULL;
-	int vendor, device;
+	unsigned int vendor, device;
 
 	if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET)
 		return;
@@ -3000,6 +3000,10 @@ void snd_hda_codec_shutdown(struct hda_codec *codec)
 {
 	struct hda_pcm *cpcm;
 
+	/* Skip the shutdown if codec is not registered */
+	if (!codec->registered)
+		return;
+
 	list_for_each_entry(cpcm, &codec->pcm_list_head, list)
 		snd_pcm_suspend_all(cpcm->pcm);
 
@@ -91,6 +91,12 @@ static void snd_hda_gen_spec_free(struct hda_gen_spec *spec)
 	free_kctls(spec);
 	snd_array_free(&spec->paths);
 	snd_array_free(&spec->loopback_list);
+#ifdef CONFIG_SND_HDA_GENERIC_LEDS
+	if (spec->led_cdevs[LED_AUDIO_MUTE])
+		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MUTE]);
+	if (spec->led_cdevs[LED_AUDIO_MICMUTE])
+		led_classdev_unregister(spec->led_cdevs[LED_AUDIO_MICMUTE]);
+#endif
 }
 
 /*
@@ -3922,7 +3928,10 @@ static int create_mute_led_cdev(struct hda_codec *codec,
 						enum led_brightness),
 				bool micmute)
 {
+	struct hda_gen_spec *spec = codec->spec;
 	struct led_classdev *cdev;
+	int idx = micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE;
+	int err;
 
 	cdev = devm_kzalloc(&codec->core.dev, sizeof(*cdev), GFP_KERNEL);
 	if (!cdev)
@@ -3932,10 +3941,14 @@ static int create_mute_led_cdev(struct hda_codec *codec,
 	cdev->max_brightness = 1;
 	cdev->default_trigger = micmute ? "audio-micmute" : "audio-mute";
 	cdev->brightness_set_blocking = callback;
-	cdev->brightness = ledtrig_audio_get(micmute ? LED_AUDIO_MICMUTE : LED_AUDIO_MUTE);
+	cdev->brightness = ledtrig_audio_get(idx);
 	cdev->flags = LED_CORE_SUSPENDRESUME;
 
-	return devm_led_classdev_register(&codec->core.dev, cdev);
+	err = led_classdev_register(&codec->core.dev, cdev);
+	if (err < 0)
+		return err;
+	spec->led_cdevs[idx] = cdev;
+	return 0;
 }
 
 /**
@@ -294,6 +294,9 @@ struct hda_gen_spec {
 				     struct hda_jack_callback *cb);
 	void (*mic_autoswitch_hook)(struct hda_codec *codec,
 				    struct hda_jack_callback *cb);
+
+	/* leds */
+	struct led_classdev *led_cdevs[NUM_AUDIO_LEDS];
 };
 
 /* values for add_stereo_mix_input flag */
@@ -97,6 +97,7 @@ struct alc_spec {
 	unsigned int gpio_mic_led_mask;
 	struct alc_coef_led mute_led_coef;
 	struct alc_coef_led mic_led_coef;
+	struct mutex coef_mutex;
 
 	hda_nid_t headset_mic_pin;
 	hda_nid_t headphone_mic_pin;
@@ -132,8 +133,8 @@ struct alc_spec {
  * COEF access helper functions
 */
 
-static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
-			       unsigned int coef_idx)
+static int __alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+				 unsigned int coef_idx)
 {
 	unsigned int val;
 
@@ -142,28 +143,61 @@ static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
 	return val;
 }
 
+static int alc_read_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+			       unsigned int coef_idx)
+{
+	struct alc_spec *spec = codec->spec;
+	unsigned int val;
+
+	mutex_lock(&spec->coef_mutex);
+	val = __alc_read_coefex_idx(codec, nid, coef_idx);
+	mutex_unlock(&spec->coef_mutex);
+	return val;
+}
+
 #define alc_read_coef_idx(codec, coef_idx) \
 	alc_read_coefex_idx(codec, 0x20, coef_idx)
 
-static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
-				 unsigned int coef_idx, unsigned int coef_val)
+static void __alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+				   unsigned int coef_idx, unsigned int coef_val)
 {
 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_COEF_INDEX, coef_idx);
 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_PROC_COEF, coef_val);
 }
 
+static void alc_write_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+				 unsigned int coef_idx, unsigned int coef_val)
+{
+	struct alc_spec *spec = codec->spec;
+
+	mutex_lock(&spec->coef_mutex);
+	__alc_write_coefex_idx(codec, nid, coef_idx, coef_val);
+	mutex_unlock(&spec->coef_mutex);
+}
+
 #define alc_write_coef_idx(codec, coef_idx, coef_val) \
 	alc_write_coefex_idx(codec, 0x20, coef_idx, coef_val)
 
+static void __alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
+				    unsigned int coef_idx, unsigned int mask,
+				    unsigned int bits_set)
+{
+	unsigned int val = __alc_read_coefex_idx(codec, nid, coef_idx);
+
+	if (val != -1)
+		__alc_write_coefex_idx(codec, nid, coef_idx,
+				       (val & ~mask) | bits_set);
+}
+
 static void alc_update_coefex_idx(struct hda_codec *codec, hda_nid_t nid,
 				  unsigned int coef_idx, unsigned int mask,
 				  unsigned int bits_set)
 {
-	unsigned int val = alc_read_coefex_idx(codec, nid, coef_idx);
+	struct alc_spec *spec = codec->spec;
 
-	if (val != -1)
-		alc_write_coefex_idx(codec, nid, coef_idx,
-				     (val & ~mask) | bits_set);
+	mutex_lock(&spec->coef_mutex);
+	__alc_update_coefex_idx(codec, nid, coef_idx, mask, bits_set);
+	mutex_unlock(&spec->coef_mutex);
 }
 
 #define alc_update_coef_idx(codec, coef_idx, mask, bits_set) \
@@ -196,13 +230,17 @@ struct coef_fw {
 static void alc_process_coef_fw(struct hda_codec *codec,
 				const struct coef_fw *fw)
 {
+	struct alc_spec *spec = codec->spec;
+
+	mutex_lock(&spec->coef_mutex);
 	for (; fw->nid; fw++) {
 		if (fw->mask == (unsigned short)-1)
-			alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
+			__alc_write_coefex_idx(codec, fw->nid, fw->idx, fw->val);
 		else
else
|
||||||
alc_update_coefex_idx(codec, fw->nid, fw->idx,
|
__alc_update_coefex_idx(codec, fw->nid, fw->idx,
|
||||||
fw->mask, fw->val);
|
fw->mask, fw->val);
|
||||||
}
|
}
|
||||||
|
mutex_unlock(&spec->coef_mutex);
|
||||||
}
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
@@ -1148,6 +1186,7 @@ static int alc_alloc_spec(struct hda_codec *codec, hda_nid_t mixer_nid)
|
|||||||
codec->spdif_status_reset = 1;
|
codec->spdif_status_reset = 1;
|
||||||
codec->forced_resume = 1;
|
codec->forced_resume = 1;
|
||||||
codec->patch_ops = alc_patch_ops;
|
codec->patch_ops = alc_patch_ops;
|
||||||
|
mutex_init(&spec->coef_mutex);
|
||||||
|
|
||||||
err = alc_codec_rename_from_preset(codec);
|
err = alc_codec_rename_from_preset(codec);
|
||||||
if (err < 0) {
|
if (err < 0) {
|
||||||
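
The patch_realtek.c hunks above close the race at concurrent COEF updates. COEF access is a two-step verb sequence (write the index, then read or write the data), so two concurrent callers could interleave and hit the wrong register; the fix splits each helper into a lock-free __alc_* primitive and a public wrapper that takes spec->coef_mutex, so composite operations such as read-modify-write and multi-write firmware sequences hold the lock across the whole sequence. A minimal sketch of that pattern against a hypothetical indexed-register device (dev_state, __reg_read, __reg_write and reg_update are illustrative names):

#include <linux/mutex.h>

/* Hypothetical device with indexed register access, like HDA COEFs */
struct dev_state {
	struct mutex lock;
	unsigned int index;		/* currently selected register */
	unsigned int regs[64];
};

/* Lock-free primitives: the caller must hold dev->lock */
static unsigned int __reg_read(struct dev_state *dev, unsigned int idx)
{
	dev->index = idx;		/* step 1: select */
	return dev->regs[dev->index];	/* step 2: access */
}

static void __reg_write(struct dev_state *dev, unsigned int idx,
			unsigned int val)
{
	dev->index = idx;
	dev->regs[dev->index] = val;
}

/* Locked wrapper: the mutex spans both steps of both calls, so a
 * concurrent caller cannot move dev->index mid-update */
static void reg_update(struct dev_state *dev, unsigned int idx,
		       unsigned int mask, unsigned int bits)
{
	mutex_lock(&dev->lock);
	__reg_write(dev, idx, (__reg_read(dev, idx) & ~mask) | bits);
	mutex_unlock(&dev->lock);
}

The mutex_init() added to alc_alloc_spec() matches this: the lock must exist before any quirk or fixup path issues its first COEF verb.
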
@@ -2120,6 +2159,7 @@ static void alc1220_fixup_gb_x570(struct hda_codec *codec,
 {
 	static const hda_nid_t conn1[] = { 0x0c };
 	static const struct coef_fw gb_x570_coefs[] = {
+		WRITE_COEF(0x07, 0x03c0),
 		WRITE_COEF(0x1a, 0x01c1),
 		WRITE_COEF(0x1b, 0x0202),
 		WRITE_COEF(0x43, 0x3005),
@@ -2546,7 +2586,8 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_GB_X570),
-	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
+	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_GB_X570),
+	SND_PCI_QUIRK(0x1458, 0xa0d5, "Gigabyte X570S Aorus Master", ALC1220_FIXUP_GB_X570),
 	SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
 	SND_PCI_QUIRK(0x1462, 0x1229, "MSI-GP73", ALC1220_FIXUP_CLEVO_P950),
@@ -2621,6 +2662,7 @@ static const struct hda_model_fixup alc882_fixup_models[] = {
 	{.id = ALC882_FIXUP_NO_PRIMARY_HP, .name = "no-primary-hp"},
 	{.id = ALC887_FIXUP_ASUS_BASS, .name = "asus-bass"},
 	{.id = ALC1220_FIXUP_GB_DUAL_CODECS, .name = "dual-codecs"},
+	{.id = ALC1220_FIXUP_GB_X570, .name = "gb-x570"},
 	{.id = ALC1220_FIXUP_CLEVO_P950, .name = "clevo-p950"},
 	{}
 };
@@ -8815,6 +8857,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+	SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
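
The table hunks above retarget the Gigabyte X570 Aorus Xtreme from the Clevo P950 fixup to ALC1220_FIXUP_GB_X570, add entries for the X570S Aorus Master and ASUS GU603, extend the fixup's COEF list, and expose the fixup under the model name "gb-x570". Quirk entries match on the codec's PCI subsystem vendor/device IDs, first match wins. A simplified userspace model of the lookup (struct quirk and find_quirk are illustrative stand-ins for the kernel's snd_pci_quirk and its lookup helper, which also support a subdevice mask):

#include <stdio.h>

struct quirk {
	unsigned short subvendor;	/* PCI subsystem vendor ID */
	unsigned short subdevice;	/* PCI subsystem device ID */
	const char *name;
	int fixup;			/* which fixup to apply */
};

enum { FIXUP_NONE, FIXUP_GB_X570, FIXUP_CLEVO_P950 };

static const struct quirk table[] = {
	{ 0x1458, 0xa0cd, "Gigabyte X570 Aorus Master",  FIXUP_GB_X570 },
	{ 0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme",  FIXUP_GB_X570 },
	{ 0x1458, 0xa0d5, "Gigabyte X570S Aorus Master", FIXUP_GB_X570 },
	{ 0, 0, NULL, FIXUP_NONE },	/* terminator, like the {} sentinel */
};

/* First matching entry wins */
static const struct quirk *find_quirk(unsigned short sv, unsigned short sd)
{
	const struct quirk *q;

	for (q = table; q->name; q++)
		if (q->subvendor == sv && q->subdevice == sd)
			return q;
	return NULL;
}

int main(void)
{
	const struct quirk *q = find_quirk(0x1458, 0xa0d5);

	printf("%s -> fixup %d\n", q ? q->name : "no match", q ? q->fixup : -1);
	return 0;
}
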
--- a/sound/soc/codecs/cpcap.c
+++ b/sound/soc/codecs/cpcap.c
@@ -1667,6 +1667,8 @@ static int cpcap_codec_probe(struct platform_device *pdev)
 {
 	struct device_node *codec_node =
 		of_get_child_by_name(pdev->dev.parent->of_node, "audio-codec");
+	if (!codec_node)
+		return -ENODEV;
 
 	pdev->dev.of_node = codec_node;
 
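The cpcap hunk adds a NULL check after of_get_child_by_name() in cpcap_codec_probe(): when the "audio-codec" child node is absent, probe now fails with -ENODEV instead of installing a NULL of_node that later DT lookups would dereference. The same defensive pattern as a sketch (probe_get_codec_node is an illustrative name; the node name comes from the caller's binding):

#include <linux/errno.h>
#include <linux/of.h>

/* Resolve a required DT child up front, or fail probe cleanly */
static int probe_get_codec_node(struct device_node *parent,
				struct device_node **out)
{
	struct device_node *child = of_get_child_by_name(parent, "audio-codec");

	if (!child)
		return -ENODEV;	/* missing node: bail before anyone uses it */

	/* of_get_child_by_name() returns the node with its refcount
	 * raised; drop it with of_node_put() once no longer needed */
	*out = child;
	return 0;
}
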
Some files were not shown because too many files have changed in this diff.