Merge 5.10.44 into android12-5.10-lts
Changes in 5.10.44
	proc: Track /proc/$pid/attr/ opener mm_struct
	ASoC: max98088: fix ni clock divider calculation
	ASoC: amd: fix for pcm_read() error
	spi: Fix spi device unregister flow
	spi: spi-zynq-qspi: Fix stack violation bug
	bpf: Forbid trampoline attach for functions with variable arguments
	net/nfc/rawsock.c: fix a permission check bug
	usb: cdns3: Fix runtime PM imbalance on error
	ASoC: Intel: bytcr_rt5640: Add quirk for the Glavey TM800A550L tablet
	ASoC: Intel: bytcr_rt5640: Add quirk for the Lenovo Miix 3-830 tablet
	vfio-ccw: Reset FSM state to IDLE inside FSM
	vfio-ccw: Serialize FSM IDLE state with I/O completion
	ASoC: sti-sas: add missing MODULE_DEVICE_TABLE
	spi: sprd: Add missing MODULE_DEVICE_TABLE
	usb: chipidea: udc: assign interrupt number to USB gadget structure
	isdn: mISDN: netjet: Fix crash in nj_probe:
	bonding: init notify_work earlier to avoid uninitialized use
	netlink: disable IRQs for netlink_lock_table()
	net: mdiobus: get rid of a BUG_ON()
	cgroup: disable controllers at parse time
	wq: handle VM suspension in stall detection
	net/qla3xxx: fix schedule while atomic in ql_sem_spinlock
	RDS tcp loopback connection can hang
	net:sfc: fix non-freed irq in legacy irq mode
	scsi: bnx2fc: Return failure if io_req is already in ABTS processing
	scsi: vmw_pvscsi: Set correct residual data length
	scsi: hisi_sas: Drop free_irq() of devm_request_irq() allocated irq
	scsi: target: qla2xxx: Wait for stop_phase1 at WWN removal
	net: macb: ensure the device is available before accessing GEMGXL control registers
	net: appletalk: cops: Fix data race in cops_probe1
	net: dsa: microchip: enable phy errata workaround on 9567
	nvme-fabrics: decode host pathing error for connect
	MIPS: Fix kernel hang under FUNCTION_GRAPH_TRACER and PREEMPT_TRACER
	dm verity: fix require_signatures module_param permissions
	bnx2x: Fix missing error code in bnx2x_iov_init_one()
	nvme-tcp: remove incorrect Kconfig dep in BLK_DEV_NVME
	nvmet: fix false keep-alive timeout when a controller is torn down
	powerpc/fsl: set fsl,i2c-erratum-a004447 flag for P2041 i2c controllers
	powerpc/fsl: set fsl,i2c-erratum-a004447 flag for P1010 i2c controllers
	spi: Don't have controller clean up spi device before driver unbind
	spi: Cleanup on failure of initial setup
	i2c: mpc: Make use of i2c_recover_bus()
	i2c: mpc: implement erratum A-004447 workaround
	ALSA: seq: Fix race of snd_seq_timer_open()
	ALSA: firewire-lib: fix the context to call snd_pcm_stop_xrun()
	ALSA: hda/realtek: headphone and mic don't work on an Acer laptop
	ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Elite Dragonfly G2
	ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP EliteBook x360 1040 G8
	ALSA: hda/realtek: fix mute/micmute LEDs for HP EliteBook 840 Aero G8
	ALSA: hda/realtek: fix mute/micmute LEDs for HP ZBook Power G8
	spi: bcm2835: Fix out-of-bounds access with more than 4 slaves
	Revert "ACPI: sleep: Put the FACS table after using it"
	drm: Fix use-after-free read in drm_getunique()
	drm: Lock pointer access in drm_master_release()
	perf/x86/intel/uncore: Fix M2M event umask for Ice Lake server
	KVM: X86: MMU: Use the correct inherited permissions to get shadow page
	kvm: avoid speculation-based attacks from out-of-range memslot accesses
	staging: rtl8723bs: Fix uninitialized variables
	async_xor: check src_offs is not NULL before updating it
	btrfs: return value from btrfs_mark_extent_written() in case of error
	btrfs: promote debugging asserts to full-fledged checks in validate_super
	cgroup1: don't allow '\n' in renaming
	ftrace: Do not blindly read the ip address in ftrace_bug()
	mmc: renesas_sdhi: abort tuning when timeout detected
	mmc: renesas_sdhi: Fix HS400 on R-Car M3-W+
	USB: f_ncm: ncm_bitrate (speed) is unsigned
	usb: f_ncm: only first packet of aggregate needs to start timer
	usb: pd: Set PD_T_SINK_WAIT_CAP to 310ms
	usb: dwc3-meson-g12a: fix usb2 PHY glue init when phy0 is disabled
	usb: dwc3: meson-g12a: Disable the regulator in the error handling path of the probe
	usb: dwc3: gadget: Bail from dwc3_gadget_exit() if dwc->gadget is NULL
	usb: dwc3: ep0: fix NULL pointer exception
	usb: musb: fix MUSB_QUIRK_B_DISCONNECT_99 handling
	usb: typec: wcove: Use LE to CPU conversion when accessing msg->header
	usb: typec: ucsi: Clear PPM capability data in ucsi_init() error path
	usb: typec: intel_pmc_mux: Put fwnode in error case during ->probe()
	usb: typec: intel_pmc_mux: Add missed error check for devm_ioremap_resource()
	usb: gadget: f_fs: Ensure io_completion_wq is idle during unbind
	USB: serial: ftdi_sio: add NovaTech OrionMX product ID
	USB: serial: omninet: add device id for Zyxel Omni 56K Plus
	USB: serial: quatech2: fix control-request directions
	USB: serial: cp210x: fix alternate function for CP2102N QFN20
	usb: gadget: eem: fix wrong eem header operation
	usb: fix various gadgets null ptr deref on 10gbps cabling.
	usb: fix various gadget panics on 10gbps cabling
	usb: typec: tcpm: cancel vdm and state machine hrtimer when unregister tcpm port
	usb: typec: tcpm: cancel frs hrtimer when unregister tcpm port
	regulator: core: resolve supply for boot-on/always-on regulators
	regulator: max77620: Use device_set_of_node_from_dev()
	regulator: bd718x7: Fix the BUCK7 voltage setting on BD71837
	regulator: fan53880: Fix missing n_voltages setting
	regulator: bd71828: Fix .n_voltages settings
	regulator: rtmv20: Fix .set_current_limit/.get_current_limit callbacks
	phy: usb: Fix misuse of IS_ENABLED
	usb: dwc3: gadget: Disable gadget IRQ during pullup disable
	usb: typec: mux: Fix copy-paste mistake in typec_mux_match
	drm/mcde: Fix off by 10^3 in calculation
	drm/msm/a6xx: fix incorrectly set uavflagprd_inv field for A650
	drm/msm/a6xx: update/fix CP_PROTECT initialization
	drm/msm/a6xx: avoid shadow NULL reference in failure path
	RDMA/ipoib: Fix warning caused by destroying non-initial netns
	RDMA/mlx4: Do not map the core_clock page to user space unless enabled
	ARM: cpuidle: Avoid orphan section warning
	vmlinux.lds.h: Avoid orphan section with !SMP
	tools/bootconfig: Fix error return code in apply_xbc()
	phy: cadence: Sierra: Fix error return code in cdns_sierra_phy_probe()
	ASoC: core: Fix Null-point-dereference in fmt_single_name()
	ASoC: meson: gx-card: fix sound-dai dt schema
	phy: ti: Fix an error code in wiz_probe()
	gpio: wcd934x: Fix shift-out-of-bounds error
	perf: Fix data race between pin_count increment/decrement
	sched/fair: Keep load_avg and load_sum synced
	sched/fair: Make sure to update tg contrib for blocked load
	sched/fair: Fix util_est UTIL_AVG_UNCHANGED handling
	x86/nmi_watchdog: Fix old-style NMI watchdog regression on old Intel CPUs
	KVM: x86: Ensure liveliness of nested VM-Enter fail tracepoint message
	IB/mlx5: Fix initializing CQ fragments buffer
	NFS: Fix a potential NULL dereference in nfs_get_client()
	NFSv4: Fix deadlock between nfs4_evict_inode() and nfs4_opendata_get_inode()
	perf session: Correct buffer copying when peeking events
	kvm: fix previous commit for 32-bit builds
	NFS: Fix use-after-free in nfs4_init_client()
	NFSv4: Fix second deadlock in nfs4_evict_inode()
	NFSv4: nfs4_proc_set_acl needs to restore NFS_CAP_UIDGID_NOMAP on error.
	scsi: core: Fix error handling of scsi_host_alloc()
	scsi: core: Fix failure handling of scsi_add_host_with_dma()
	scsi: core: Put .shost_dev in failure path if host state changes to RUNNING
	scsi: core: Only put parent device if host state differs from SHOST_CREATED
	tracing: Correct the length check which causes memory corruption
	proc: only require mm_struct for writing
	Linux 5.10.44

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ic64172b4e72ccb54d96000b3065dd8b33aa9fef5
@@ -57,7 +57,7 @@ patternProperties:
           rate
         sound-dai:
-          $ref: /schemas/types.yaml#/definitions/phandle
+          $ref: /schemas/types.yaml#/definitions/phandle-array
           description: phandle of the CPU DAI
 
 patternProperties:
@@ -71,7 +71,7 @@ patternProperties:
 
       properties:
         sound-dai:
-          $ref: /schemas/types.yaml#/definitions/phandle
+          $ref: /schemas/types.yaml#/definitions/phandle-array
          description: phandle of the codec DAI
 
 required:
@@ -171,8 +171,8 @@ Shadow pages contain the following information:
     shadow pages) so role.quadrant takes values in the range 0..3.  Each
     quadrant maps 1GB virtual address space.
   role.access:
-    Inherited guest access permissions in the form uwx.  Note execute
-    permission is positive, not negative.
+    Inherited guest access permissions from the parent ptes in the form uwx.
+    Note execute permission is positive, not negative.
   role.invalid:
     The page is invalid and should not be used.  It is a root page that is
     currently pinned (by a cpu hardware register pointing to it); once it is
Makefile

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 43
+SUBLEVEL = 44
 EXTRAVERSION =
 NAME = Dare mighty things
@@ -7,9 +7,11 @@
 #ifdef CONFIG_CPU_IDLE
 extern int arm_cpuidle_simple_enter(struct cpuidle_device *dev,
 		struct cpuidle_driver *drv, int index);
+#define __cpuidle_method_section __used __section("__cpuidle_method_of_table")
 #else
 static inline int arm_cpuidle_simple_enter(struct cpuidle_device *dev,
 		struct cpuidle_driver *drv, int index) { return -ENODEV; }
+#define __cpuidle_method_section __maybe_unused /* drop silently */
 #endif
 
 /* Common ARM WFI state */
@@ -42,8 +44,7 @@ struct of_cpuidle_method {
 
 #define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops) \
 	static const struct of_cpuidle_method __cpuidle_method_of_table_##name \
-	__used __section("__cpuidle_method_of_table") \
-	= { .method = _method, .ops = _ops }
+	__cpuidle_method_section = { .method = _method, .ops = _ops }
 
 extern int arm_cpuidle_suspend(int index);
@@ -37,7 +37,7 @@
  */
 notrace void arch_local_irq_disable(void)
 {
-	preempt_disable();
+	preempt_disable_notrace();
 
 	__asm__ __volatile__(
 	"	.set	push						\n"
@@ -53,7 +53,7 @@ notrace void arch_local_irq_disable(void)
 	: /* no inputs */
 	: "memory");
 
-	preempt_enable();
+	preempt_enable_notrace();
 }
 EXPORT_SYMBOL(arch_local_irq_disable);
 
@@ -61,7 +61,7 @@ notrace unsigned long arch_local_irq_save(void)
 {
 	unsigned long flags;
 
-	preempt_disable();
+	preempt_disable_notrace();
 
 	__asm__ __volatile__(
 	"	.set	push						\n"
@@ -78,7 +78,7 @@ notrace unsigned long arch_local_irq_save(void)
 	: /* no inputs */
 	: "memory");
 
-	preempt_enable();
+	preempt_enable_notrace();
 
 	return flags;
 }
@@ -88,7 +88,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
 {
 	unsigned long __tmp1;
 
-	preempt_disable();
+	preempt_disable_notrace();
 
 	__asm__ __volatile__(
 	"	.set	push						\n"
@@ -106,7 +106,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
 	: "0" (flags)
 	: "memory");
 
-	preempt_enable();
+	preempt_enable_notrace();
 }
 EXPORT_SYMBOL(arch_local_irq_restore);
@@ -122,7 +122,15 @@
 	};
 
 /include/ "pq3-i2c-0.dtsi"
+	i2c@3000 {
+		fsl,i2c-erratum-a004447;
+	};
+
 /include/ "pq3-i2c-1.dtsi"
+	i2c@3100 {
+		fsl,i2c-erratum-a004447;
+	};
+
 /include/ "pq3-duart-0.dtsi"
 /include/ "pq3-espi-0.dtsi"
 	spi0: spi@7000 {
@@ -371,7 +371,23 @@
 	};
 
 /include/ "qoriq-i2c-0.dtsi"
+	i2c@118000 {
+		fsl,i2c-erratum-a004447;
+	};
+
+	i2c@118100 {
+		fsl,i2c-erratum-a004447;
+	};
+
 /include/ "qoriq-i2c-1.dtsi"
+	i2c@119000 {
+		fsl,i2c-erratum-a004447;
+	};
+
+	i2c@119100 {
+		fsl,i2c-erratum-a004447;
+	};
+
 /include/ "qoriq-duart-0.dtsi"
 /include/ "qoriq-duart-1.dtsi"
 /include/ "qoriq-gpio-0.dtsi"
@@ -5067,9 +5067,10 @@ static struct intel_uncore_type icx_uncore_m2m = {
 	.perf_ctr	= SNR_M2M_PCI_PMON_CTR0,
 	.event_ctl	= SNR_M2M_PCI_PMON_CTL0,
 	.event_mask	= SNBEP_PMON_RAW_EVENT_MASK,
+	.event_mask_ext	= SNR_M2M_PCI_PMON_UMASK_EXT,
 	.box_ctl	= SNR_M2M_PCI_PMON_BOX_CTL,
 	.ops		= &snr_m2m_uncore_pci_ops,
-	.format_group	= &skx_uncore_format_group,
+	.format_group	= &snr_m2m_uncore_format_group,
 };
 
 static struct attribute *icx_upi_uncore_formats_attr[] = {
@@ -63,7 +63,7 @@ static inline unsigned int nmi_perfctr_msr_to_bit(unsigned int msr)
 		case 15:
 			return msr - MSR_P4_BPU_PERFCTR0;
 		}
-		fallthrough;
+		break;
 	case X86_VENDOR_ZHAOXIN:
 	case X86_VENDOR_CENTAUR:
 		return msr - MSR_ARCH_PERFMON_PERFCTR0;
@@ -96,7 +96,7 @@ static inline unsigned int nmi_evntsel_msr_to_bit(unsigned int msr)
 		case 15:
 			return msr - MSR_P4_BSU_ESCR0;
 		}
-		fallthrough;
+		break;
 	case X86_VENDOR_ZHAOXIN:
 	case X86_VENDOR_CENTAUR:
 		return msr - MSR_ARCH_PERFMON_EVENTSEL0;
@@ -90,8 +90,8 @@ struct guest_walker {
 	gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
 	pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
 	bool pte_writable[PT_MAX_FULL_LEVELS];
-	unsigned pt_access;
-	unsigned pte_access;
+	unsigned int pt_access[PT_MAX_FULL_LEVELS];
+	unsigned int pte_access;
 	gfn_t gfn;
 	struct x86_exception fault;
 };
@@ -418,13 +418,15 @@ retry_walk:
 		}
 
 		walker->ptes[walker->level - 1] = pte;
+
+		/* Convert to ACC_*_MASK flags for struct guest_walker.  */
+		walker->pt_access[walker->level - 1] = FNAME(gpte_access)(pt_access ^ walk_nx_mask);
 	} while (!is_last_gpte(mmu, walker->level, pte));
 
 	pte_pkey = FNAME(gpte_pkeys)(vcpu, pte);
 	accessed_dirty = have_ad ? pte_access & PT_GUEST_ACCESSED_MASK : 0;
 
 	/* Convert to ACC_*_MASK flags for struct guest_walker.  */
-	walker->pt_access = FNAME(gpte_access)(pt_access ^ walk_nx_mask);
 	walker->pte_access = FNAME(gpte_access)(pte_access ^ walk_nx_mask);
 	errcode = permission_fault(vcpu, mmu, walker->pte_access, pte_pkey, access);
 	if (unlikely(errcode))
@@ -463,7 +465,8 @@ retry_walk:
 	}
 
 	pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
-		 __func__, (u64)pte, walker->pte_access, walker->pt_access);
+		 __func__, (u64)pte, walker->pte_access,
+		 walker->pt_access[walker->level - 1]);
 	return 1;
 
 error:
@@ -635,7 +638,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
-	unsigned direct_access, access = gw->pt_access;
+	unsigned int direct_access, access;
 	int top_level, level, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
@@ -667,6 +670,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		sp = NULL;
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
+			access = gw->pt_access[it.level - 2];
 			sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
 					      false, access);
 		}
@@ -1514,16 +1514,16 @@ TRACE_EVENT(kvm_nested_vmenter_failed,
 	TP_ARGS(msg, err),
 
 	TP_STRUCT__entry(
-		__field(const char *, msg)
+		__string(msg, msg)
 		__field(u32, err)
 	),
 
 	TP_fast_assign(
-		__entry->msg = msg;
+		__assign_str(msg, msg);
 		__entry->err = err;
 	),
 
-	TP_printk("%s%s", __entry->msg, !__entry->err ? "" :
+	TP_printk("%s%s", __get_str(msg), !__entry->err ? "" :
 		  __print_symbolic(__entry->err, VMX_VMENTER_INSTRUCTION_ERRORS))
 );
@@ -233,7 +233,8 @@ async_xor_offs(struct page *dest, unsigned int offset,
 	if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
 		src_cnt--;
 		src_list++;
-		src_offs++;
+		if (src_offs)
+			src_offs++;
 	}
 
 	/* wait for any prerequisite operations */
||||
@@ -1290,10 +1290,8 @@ static void acpi_sleep_hibernate_setup(void)
|
||||
return;
|
||||
|
||||
acpi_get_table(ACPI_SIG_FACS, 1, (struct acpi_table_header **)&facs);
|
||||
if (facs) {
|
||||
if (facs)
|
||||
s4_hardware_signature = facs->hardware_signature;
|
||||
acpi_put_table((struct acpi_table_header *)facs);
|
||||
}
|
||||
}
|
||||
#else /* !CONFIG_HIBERNATION */
|
||||
static inline void acpi_sleep_hibernate_setup(void) {}
|
||||
|
||||
@@ -7,7 +7,7 @@
 #include <linux/slab.h>
 #include <linux/of_device.h>
 
-#define WCD_PIN_MASK(p) BIT(p - 1)
+#define WCD_PIN_MASK(p) BIT(p)
 #define WCD_REG_DIR_CTL_OFFSET 0x42
 #define WCD_REG_VAL_CTL_OFFSET 0x43
 #define WCD934X_NPINS 5
@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
 void drm_master_release(struct drm_file *file_priv)
 {
 	struct drm_device *dev = file_priv->minor->dev;
-	struct drm_master *master = file_priv->master;
+	struct drm_master *master;
 
 	mutex_lock(&dev->master_mutex);
+	master = file_priv->master;
 	if (file_priv->magic)
 		idr_remove(&file_priv->master->magic_map, file_priv->magic);
||||
@@ -118,17 +118,18 @@ int drm_getunique(struct drm_device *dev, void *data,
|
||||
struct drm_file *file_priv)
|
||||
{
|
||||
struct drm_unique *u = data;
|
||||
struct drm_master *master = file_priv->master;
|
||||
struct drm_master *master;
|
||||
|
||||
mutex_lock(&master->dev->master_mutex);
|
||||
mutex_lock(&dev->master_mutex);
|
||||
master = file_priv->master;
|
||||
if (u->unique_len >= master->unique_len) {
|
||||
if (copy_to_user(u->unique, master->unique, master->unique_len)) {
|
||||
mutex_unlock(&master->dev->master_mutex);
|
||||
mutex_unlock(&dev->master_mutex);
|
||||
return -EFAULT;
|
||||
}
|
||||
}
|
||||
u->unique_len = master->unique_len;
|
||||
mutex_unlock(&master->dev->master_mutex);
|
||||
mutex_unlock(&dev->master_mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -577,7 +577,7 @@ static void mcde_dsi_setup_video_mode(struct mcde_dsi *d,
 	 * porches and sync.
 	 */
 	/* (ps/s) / (pixels/s) = ps/pixels */
-	pclk = DIV_ROUND_UP_ULL(1000000000000, mode->clock);
+	pclk = DIV_ROUND_UP_ULL(1000000000000, (mode->clock * 1000));
 	dev_dbg(d->dev, "picoseconds between two pixels: %llu\n",
 		pclk);
@@ -154,7 +154,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
	 * GPU registers so we need to add 0x1a800 to the register value on A630
	 * to get the right value from PM4.
	 */
-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
+	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
		rbmemptr_stats(ring, index, alwayson_start));
 
	/* Invalidate CCU depth and color */
@@ -184,7 +184,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 
	get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
		rbmemptr_stats(ring, index, cpcycles_end));
-	get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
+	get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
		rbmemptr_stats(ring, index, alwayson_end));
 
	/* Write the fence to the scratch register */
@@ -203,8 +203,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
	OUT_RING(ring, submit->seqno);
 
	trace_msm_gpu_submit_flush(submit,
-		gmu_read64(&a6xx_gpu->gmu, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L,
-			REG_A6XX_GMU_ALWAYS_ON_COUNTER_H));
+		gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+			REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));
 
	a6xx_flush(gpu, ring);
 }
@@ -459,6 +459,113 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
	gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0);
 }
 
+/* For a615, a616, a618, A619, a630, a640 and a680 */
+static const u32 a6xx_protect[] = {
+	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
+	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
+	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
+	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
+	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
+	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
+	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
+	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
+	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
+	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
+	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
+	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
+	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
+	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
+	A6XX_PROTECT_NORDWR(0x09e70, 0x0001),
+	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
+	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
+	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
+	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
+	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
+	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x11c00, 0x0000), /* note: infinite range */
+};
+
+/* These are for a620 and a650 */
+static const u32 a650_protect[] = {
+	A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
+	A6XX_PROTECT_RDONLY(0x00501, 0x0005),
+	A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
+	A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00510, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00534, 0x0000),
+	A6XX_PROTECT_NORDWR(0x00800, 0x0082),
+	A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
+	A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
+	A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
+	A6XX_PROTECT_NORDWR(0x00900, 0x004d),
+	A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
+	A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
+	A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
+	A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
+	A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
+	A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
+	A6XX_PROTECT_NORDWR(0x08e80, 0x027f),
+	A6XX_PROTECT_NORDWR(0x09624, 0x01db),
+	A6XX_PROTECT_NORDWR(0x09e60, 0x0011),
+	A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
+	A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
+	A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
+	A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
+	A6XX_PROTECT_NORDWR(0x0b608, 0x0007),
+	A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
+	A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
+	A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
+	A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x18400, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x1a800, 0x1fff),
+	A6XX_PROTECT_NORDWR(0x1f400, 0x0443),
+	A6XX_PROTECT_RDONLY(0x1f844, 0x007b),
+	A6XX_PROTECT_NORDWR(0x1f887, 0x001b),
+	A6XX_PROTECT_NORDWR(0x1f8c0, 0x0000), /* note: infinite range */
+};
+
+static void a6xx_set_cp_protect(struct msm_gpu *gpu)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	const u32 *regs = a6xx_protect;
+	unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
+
+	BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+	BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
+
+	if (adreno_is_a650(adreno_gpu)) {
+		regs = a650_protect;
+		count = ARRAY_SIZE(a650_protect);
+		count_max = 48;
+	}
+
+	/*
+	 * Enable access protection to privileged registers, fault on an access
+	 * protect violation and select the last span to protect from the start
+	 * address all the way to the end of the register address space
+	 */
+	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));
+
+	for (i = 0; i < count - 1; i++)
+		gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
+	/* last CP_PROTECT to have "infinite" length on the last entry */
+	gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);
+}
+
 static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
 {
	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -486,7 +593,7 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
		rgb565_predicator << 11 | amsbc << 4 | lower_bit << 1);
	gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, lower_bit << 1);
	gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL,
-		uavflagprd_inv >> 4 | lower_bit << 1);
+		uavflagprd_inv << 4 | lower_bit << 1);
	gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, lower_bit << 21);
 }
 
@@ -722,41 +829,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
 	}
 
 	/* Protect registers from the CP */
-	gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, 0x00000003);
-
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(0),
-		A6XX_PROTECT_RDONLY(0x600, 0x51));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(1), A6XX_PROTECT_RW(0xae50, 0x2));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(2), A6XX_PROTECT_RW(0x9624, 0x13));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(3), A6XX_PROTECT_RW(0x8630, 0x8));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(4), A6XX_PROTECT_RW(0x9e70, 0x1));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(5), A6XX_PROTECT_RW(0x9e78, 0x187));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(6), A6XX_PROTECT_RW(0xf000, 0x810));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(7),
-		A6XX_PROTECT_RDONLY(0xfc00, 0x3));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(8), A6XX_PROTECT_RW(0x50e, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(9), A6XX_PROTECT_RDONLY(0x50f, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(10), A6XX_PROTECT_RW(0x510, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(11),
-		A6XX_PROTECT_RDONLY(0x0, 0x4f9));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(12),
-		A6XX_PROTECT_RDONLY(0x501, 0xa));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(13),
-		A6XX_PROTECT_RDONLY(0x511, 0x44));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(14), A6XX_PROTECT_RW(0xe00, 0xe));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(15), A6XX_PROTECT_RW(0x8e00, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(16), A6XX_PROTECT_RW(0x8e50, 0xf));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(17), A6XX_PROTECT_RW(0xbe02, 0x0));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(18),
-		A6XX_PROTECT_RW(0xbe20, 0x11f3));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(19), A6XX_PROTECT_RW(0x800, 0x82));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(20), A6XX_PROTECT_RW(0x8a0, 0x8));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(21), A6XX_PROTECT_RW(0x8ab, 0x19));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(22), A6XX_PROTECT_RW(0x900, 0x4d));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(23), A6XX_PROTECT_RW(0x98d, 0x76));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(24),
-		A6XX_PROTECT_RDONLY(0x980, 0x4));
-	gpu_write(gpu, REG_A6XX_CP_PROTECT(25), A6XX_PROTECT_RW(0xa630, 0x0));
+	a6xx_set_cp_protect(gpu);
 
 	/* Enable expanded apriv for targets that support it */
 	if (gpu->hw_apriv) {
@@ -1055,7 +1128,7 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
 	if (ret)
 		return ret;
 
-	if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
+	if (a6xx_gpu->shadow_bo)
 		for (i = 0; i < gpu->nr_rings; i++)
 			a6xx_gpu->shadow[i] = 0;
@@ -37,7 +37,7 @@ struct a6xx_gpu {
  * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len
  * registers starting at _reg.
  */
-#define A6XX_PROTECT_RW(_reg, _len) \
+#define A6XX_PROTECT_NORDWR(_reg, _len) \
	((1 << 31) | \
	 (((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))
@@ -23,6 +23,7 @@
 
 #include <linux/clk.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
 #include <linux/fsl_devices.h>
 #include <linux/i2c.h>
 #include <linux/interrupt.h>
@@ -49,6 +50,7 @@
 #define CCR_MTX  0x10
 #define CCR_TXAK 0x08
 #define CCR_RSTA 0x04
+#define CCR_RSVD 0x02
 
 #define CSR_MCF  0x80
 #define CSR_MAAS 0x40
@@ -70,6 +72,7 @@ struct mpc_i2c {
 	u8 fdr, dfsrr;
 #endif
 	struct clk *clk_per;
+	bool has_errata_A004447;
 };
 
 struct mpc_i2c_divider {
@@ -176,6 +179,75 @@ static int i2c_wait(struct mpc_i2c *i2c, unsigned timeout, int writing)
 	return 0;
 }
 
+static int i2c_mpc_wait_sr(struct mpc_i2c *i2c, int mask)
+{
+	void __iomem *addr = i2c->base + MPC_I2C_SR;
+	u8 val;
+
+	return readb_poll_timeout(addr, val, val & mask, 0, 100);
+}
+
+/*
+ * Workaround for Erratum A004447. From the P2040CE Rev Q
+ *
+ * 1.  Set up the frequency divider and sampling rate.
+ * 2.  I2CCR - a0h
+ * 3.  Poll for I2CSR[MBB] to get set.
+ * 4.  If I2CSR[MAL] is set (an indication that SDA is stuck low), then go to
+ *     step 5. If MAL is not set, then go to step 13.
+ * 5.  I2CCR - 00h
+ * 6.  I2CCR - 22h
+ * 7.  I2CCR - a2h
+ * 8.  Poll for I2CSR[MBB] to get set.
+ * 9.  Issue read to I2CDR.
+ * 10. Poll for I2CSR[MIF] to be set.
+ * 11. I2CCR - 82h
+ * 12. Workaround complete. Skip the next steps.
+ * 13. Issue read to I2CDR.
+ * 14. Poll for I2CSR[MIF] to be set.
+ * 15. I2CCR - 80h
+ */
+static void mpc_i2c_fixup_A004447(struct mpc_i2c *i2c)
+{
+	int ret;
+	u32 val;
+
+	writeccr(i2c, CCR_MEN | CCR_MSTA);
+	ret = i2c_mpc_wait_sr(i2c, CSR_MBB);
+	if (ret) {
+		dev_err(i2c->dev, "timeout waiting for CSR_MBB\n");
+		return;
+	}
+
+	val = readb(i2c->base + MPC_I2C_SR);
+
+	if (val & CSR_MAL) {
+		writeccr(i2c, 0x00);
+		writeccr(i2c, CCR_MSTA | CCR_RSVD);
+		writeccr(i2c, CCR_MEN | CCR_MSTA | CCR_RSVD);
+		ret = i2c_mpc_wait_sr(i2c, CSR_MBB);
+		if (ret) {
+			dev_err(i2c->dev, "timeout waiting for CSR_MBB\n");
+			return;
+		}
+		val = readb(i2c->base + MPC_I2C_DR);
+		ret = i2c_mpc_wait_sr(i2c, CSR_MIF);
+		if (ret) {
+			dev_err(i2c->dev, "timeout waiting for CSR_MIF\n");
+			return;
+		}
+		writeccr(i2c, CCR_MEN | CCR_RSVD);
+	} else {
+		val = readb(i2c->base + MPC_I2C_DR);
+		ret = i2c_mpc_wait_sr(i2c, CSR_MIF);
+		if (ret) {
+			dev_err(i2c->dev, "timeout waiting for CSR_MIF\n");
+			return;
+		}
+		writeccr(i2c, CCR_MEN);
+	}
+}
+
 #if defined(CONFIG_PPC_MPC52xx) || defined(CONFIG_PPC_MPC512x)
 static const struct mpc_i2c_divider mpc_i2c_dividers_52xx[] = {
 	{20, 0x20}, {22, 0x21}, {24, 0x22}, {26, 0x23},
@@ -586,7 +658,7 @@ static int mpc_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
 			if ((status & (CSR_MCF | CSR_MBB | CSR_RXAK)) != 0) {
 				writeb(status & ~CSR_MAL,
 				       i2c->base + MPC_I2C_SR);
-				mpc_i2c_fixup(i2c);
+				i2c_recover_bus(&i2c->adap);
 			}
 			return -EIO;
 		}
@@ -622,7 +694,7 @@ static int mpc_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
 			if ((status & (CSR_MCF | CSR_MBB | CSR_RXAK)) != 0) {
 				writeb(status & ~CSR_MAL,
 				       i2c->base + MPC_I2C_SR);
-				mpc_i2c_fixup(i2c);
+				i2c_recover_bus(&i2c->adap);
 			}
 			return -EIO;
 		}
@@ -637,6 +709,18 @@ static u32 mpc_functionality(struct i2c_adapter *adap)
 	  | I2C_FUNC_SMBUS_READ_BLOCK_DATA | I2C_FUNC_SMBUS_BLOCK_PROC_CALL;
 }
 
+static int fsl_i2c_bus_recovery(struct i2c_adapter *adap)
+{
+	struct mpc_i2c *i2c = i2c_get_adapdata(adap);
+
+	if (i2c->has_errata_A004447)
+		mpc_i2c_fixup_A004447(i2c);
+	else
+		mpc_i2c_fixup(i2c);
+
+	return 0;
+}
+
 static const struct i2c_algorithm mpc_algo = {
 	.master_xfer = mpc_xfer,
 	.functionality = mpc_functionality,
@@ -648,6 +732,10 @@ static struct i2c_adapter mpc_ops = {
 	.timeout = HZ,
 };
 
+static struct i2c_bus_recovery_info fsl_i2c_recovery_info = {
+	.recover_bus = fsl_i2c_bus_recovery,
+};
+
 static const struct of_device_id mpc_i2c_of_match[];
 static int fsl_i2c_probe(struct platform_device *op)
 {
@@ -732,6 +820,8 @@ static int fsl_i2c_probe(struct platform_device *op)
 	dev_info(i2c->dev, "timeout %u us\n", mpc_ops.timeout * 1000000 / HZ);
 
 	platform_set_drvdata(op, i2c);
+	if (of_property_read_bool(op->dev.of_node, "fsl,i2c-erratum-a004447"))
+		i2c->has_errata_A004447 = true;
 
 	i2c->adap = mpc_ops;
 	of_address_to_resource(op->dev.of_node, 0, &res);
@@ -740,6 +830,7 @@ static int fsl_i2c_probe(struct platform_device *op)
|
||||
i2c_set_adapdata(&i2c->adap, i2c);
|
||||
i2c->adap.dev.parent = &op->dev;
|
||||
i2c->adap.dev.of_node = of_node_get(op->dev.of_node);
|
||||
i2c->adap.bus_recovery_info = &fsl_i2c_recovery_info;
|
||||
|
||||
result = i2c_add_adapter(&i2c->adap);
|
||||
if (result < 0)
|
||||
|
||||
@@ -580,12 +580,9 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
props->cq_caps.max_cq_moderation_count = MLX4_MAX_CQ_COUNT;
props->cq_caps.max_cq_moderation_period = MLX4_MAX_CQ_PERIOD;

if (!mlx4_is_slave(dev->dev))
err = mlx4_get_internal_clock_params(dev->dev, &clock_params);

if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) {
resp.response_length += sizeof(resp.hca_core_clock_offset);
if (!err && !mlx4_is_slave(dev->dev)) {
if (!mlx4_get_internal_clock_params(dev->dev, &clock_params)) {
resp.comp_mask |= MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET;
resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE;
}

@@ -838,15 +838,14 @@ static void destroy_cq_user(struct mlx5_ib_cq *cq, struct ib_udata *udata)
ib_umem_release(cq->buf.umem);
}

static void init_cq_frag_buf(struct mlx5_ib_cq *cq,
struct mlx5_ib_cq_buf *buf)
static void init_cq_frag_buf(struct mlx5_ib_cq_buf *buf)
{
int i;
void *cqe;
struct mlx5_cqe64 *cqe64;

for (i = 0; i < buf->nent; i++) {
cqe = get_cqe(cq, i);
cqe = mlx5_frag_buf_get_wqe(&buf->fbc, i);
cqe64 = buf->cqe_size == 64 ? cqe : cqe + 64;
cqe64->op_own = MLX5_CQE_INVALID << 4;
}
@@ -872,7 +871,7 @@ static int create_cq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
if (err)
goto err_db;

init_cq_frag_buf(cq, &cq->buf);
init_cq_frag_buf(&cq->buf);

*inlen = MLX5_ST_SZ_BYTES(create_cq_in) +
MLX5_FLD_SZ_BYTES(create_cq_in, pas[0]) *
@@ -1177,7 +1176,7 @@ static int resize_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
if (err)
goto ex;

init_cq_frag_buf(cq, cq->resize_buf);
init_cq_frag_buf(cq->resize_buf);

return 0;

@@ -163,6 +163,7 @@ static size_t ipoib_get_size(const struct net_device *dev)

static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
.kind = "ipoib",
.netns_refund = true,
.maxtype = IFLA_IPOIB_MAX,
.policy = ipoib_policy,
.priv_size = sizeof(struct ipoib_dev_priv),

@@ -1100,7 +1100,6 @@ nj_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
card->typ = NETJET_S_TJ300;

card->base = pci_resource_start(pdev, 0);
card->irq = pdev->irq;
pci_set_drvdata(pdev, card);
err = setup_instance(card);
if (err)

@@ -15,7 +15,7 @@
#define DM_VERITY_VERIFY_ERR(s) DM_VERITY_ROOT_HASH_VERIFICATION " " s

static bool require_signatures;
module_param(require_signatures, bool, false);
module_param(require_signatures, bool, 0444);
MODULE_PARM_DESC(require_signatures,
"Verify the roothash of dm-verity hash tree");

@@ -660,14 +660,19 @@ static int renesas_sdhi_execute_tuning(struct mmc_host *mmc, u32 opcode)

/* Issue CMD19 twice for each tap */
for (i = 0; i < 2 * priv->tap_num; i++) {
int cmd_error;

/* Set sampling clock position */
sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num);

if (mmc_send_tuning(mmc, opcode, NULL) == 0)
if (mmc_send_tuning(mmc, opcode, &cmd_error) == 0)
set_bit(i, priv->taps);

if (sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_SMPCMP) == 0)
set_bit(i, priv->smpcmp);

if (cmd_error)
mmc_abort_tuning(mmc, opcode);
}

ret = renesas_sdhi_select_tuning(host);
@@ -897,7 +902,7 @@ static const struct soc_device_attribute sdhi_quirks_match[] = {
{ .soc_id = "r8a7795", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps2367 },
{ .soc_id = "r8a7796", .revision = "ES1.[012]", .data = &sdhi_quirks_4tap_nohs400 },
{ .soc_id = "r8a7796", .revision = "ES1.*", .data = &sdhi_quirks_r8a7796_es13 },
{ .soc_id = "r8a7796", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps1357 },
{ .soc_id = "r8a77961", .data = &sdhi_quirks_bad_taps1357 },
{ .soc_id = "r8a77965", .data = &sdhi_quirks_r8a77965 },
{ .soc_id = "r8a77980", .data = &sdhi_quirks_nohs400 },
{ .soc_id = "r8a77990", .data = &sdhi_quirks_r8a77990 },

@@ -327,6 +327,8 @@ static int __init cops_probe1(struct net_device *dev, int ioaddr)
break;
}

dev->base_addr = ioaddr;

/* Reserve any actual interrupt. */
if (dev->irq) {
retval = request_irq(dev->irq, cops_interrupt, 0, dev->name, dev);
@@ -334,8 +336,6 @@ static int __init cops_probe1(struct net_device *dev, int ioaddr)
goto err_out;
}

dev->base_addr = ioaddr;

lp = netdev_priv(dev);
spin_lock_init(&lp->lock);

@@ -1502,6 +1502,7 @@ static struct slave *bond_alloc_slave(struct bonding *bond,

slave->bond = bond;
slave->dev = slave_dev;
INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);

if (bond_kobj_init(slave))
return NULL;
@@ -1514,7 +1515,6 @@ static struct slave *bond_alloc_slave(struct bonding *bond,
return NULL;
}
}
INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);

return slave;
}

@@ -1532,6 +1532,7 @@ static const struct ksz_chip_data ksz9477_switch_chips[] = {
.num_statics = 16,
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
.phy_errata_9477 = true,
},
};

@@ -1224,8 +1224,10 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
goto failed;

/* SR-IOV capability was enabled but there are no VFs*/
if (iov->total == 0)
if (iov->total == 0) {
err = -EINVAL;
goto failed;
}

iov->nr_virtfn = min_t(u16, iov->total, num_vfs_param);

@@ -2709,6 +2709,9 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
struct gem_stats *hwstat = &bp->hw_stats.gem;
struct net_device_stats *nstat = &bp->dev->stats;

if (!netif_running(bp->dev))
return nstat;

gem_update_stats(bp);

nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +

@@ -823,6 +823,7 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
#define QUERY_DEV_CAP_MAD_DEMUX_OFFSET 0xb0
#define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_BASE_OFFSET 0xa8
#define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_RANGE_OFFSET 0xac
#define QUERY_DEV_CAP_MAP_CLOCK_TO_USER 0xc1
#define QUERY_DEV_CAP_QP_RATE_LIMIT_NUM_OFFSET 0xcc
#define QUERY_DEV_CAP_QP_RATE_LIMIT_MAX_OFFSET 0xd0
#define QUERY_DEV_CAP_QP_RATE_LIMIT_MIN_OFFSET 0xd2
@@ -841,6 +842,8 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)

if (mlx4_is_mfunc(dev))
disable_unsupported_roce_caps(outbox);
MLX4_GET(field, outbox, QUERY_DEV_CAP_MAP_CLOCK_TO_USER);
dev_cap->map_clock_to_user = field & 0x80;
MLX4_GET(field, outbox, QUERY_DEV_CAP_RSVD_QP_OFFSET);
dev_cap->reserved_qps = 1 << (field & 0xf);
MLX4_GET(field, outbox, QUERY_DEV_CAP_MAX_QP_OFFSET);

@@ -131,6 +131,7 @@ struct mlx4_dev_cap {
u32 health_buffer_addrs;
struct mlx4_port_cap port_cap[MLX4_MAX_PORTS + 1];
bool wol_port[MLX4_MAX_PORTS + 1];
bool map_clock_to_user;
};

struct mlx4_func_cap {

@@ -498,6 +498,7 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
}
}

dev->caps.map_clock_to_user = dev_cap->map_clock_to_user;
dev->caps.uar_page_size = PAGE_SIZE;
dev->caps.num_uars = dev_cap->uar_size / PAGE_SIZE;
dev->caps.local_ca_ack_delay = dev_cap->local_ca_ack_delay;
@@ -1948,6 +1949,11 @@ int mlx4_get_internal_clock_params(struct mlx4_dev *dev,
if (mlx4_is_slave(dev))
return -EOPNOTSUPP;

if (!dev->caps.map_clock_to_user) {
mlx4_dbg(dev, "Map clock to user is not supported.\n");
return -EOPNOTSUPP;
}

if (!params)
return -EINVAL;

@@ -114,7 +114,7 @@ static int ql_sem_spinlock(struct ql3_adapter *qdev,
value = readl(&port_regs->CommonRegs.semaphoreReg);
if ((value & (sem_mask >> 16)) == sem_bits)
return 0;
ssleep(1);
mdelay(1000);
} while (--seconds);
return -1;
}

@@ -90,6 +90,7 @@ int efx_nic_init_interrupt(struct efx_nic *efx)
efx->pci_dev->irq);
goto fail1;
}
efx->irqs_hooked = true;
return 0;
}

@@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
struct mdio_device *mdiodev;
int i;

BUG_ON(bus->state != MDIOBUS_REGISTERED);
if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
return;
bus->state = MDIOBUS_UNREGISTERED;

for (i = 0; i < PHY_MAX_ADDR; i++) {

@@ -71,7 +71,8 @@ config NVME_FC
config NVME_TCP
tristate "NVM Express over Fabrics TCP host driver"
depends on INET
depends on BLK_DEV_NVME
depends on BLOCK
select NVME_CORE
select NVME_FABRICS
select CRYPTO
select CRYPTO_CRC32C

@@ -336,6 +336,11 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl,
cmd->connect.recfmt);
break;

case NVME_SC_HOST_PATH_ERROR:
dev_err(ctrl->device,
"Connect command failed: host path error\n");
break;

default:
dev_err(ctrl->device,
"Connect command failed, error wo/DNR bit: %d\n",

@@ -379,10 +379,10 @@ static void nvmet_keep_alive_timer(struct work_struct *work)
{
struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
struct nvmet_ctrl, ka_work);
bool cmd_seen = ctrl->cmd_seen;
bool reset_tbkas = ctrl->reset_tbkas;

ctrl->cmd_seen = false;
if (cmd_seen) {
ctrl->reset_tbkas = false;
if (reset_tbkas) {
pr_debug("ctrl %d reschedule traffic based keep-alive timer\n",
ctrl->cntlid);
schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
@@ -792,6 +792,13 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
percpu_ref_exit(&sq->ref);

if (ctrl) {
/*
* The teardown flow may take some time, and the host may not
* send us keep-alive during this period, hence reset the
* traffic based keep-alive timer so we don't trigger a
* controller teardown as a result of a keep-alive expiration.
*/
ctrl->reset_tbkas = true;
nvmet_ctrl_put(ctrl);
sq->ctrl = NULL; /* allows reusing the queue later */
}
@@ -942,7 +949,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
}

if (sq->ctrl)
sq->ctrl->cmd_seen = true;
sq->ctrl->reset_tbkas = true;

return true;

@@ -166,7 +166,7 @@ struct nvmet_ctrl {
struct nvmet_subsys *subsys;
struct nvmet_sq **sqs;

bool cmd_seen;
bool reset_tbkas;

struct mutex lock;
u64 cap;

@@ -78,7 +78,7 @@ static inline u32 brcm_usb_readl(void __iomem *addr)
* Other architectures (e.g., ARM) either do not support big endian, or
* else leave I/O in little endian mode.
*/
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
return __raw_readl(addr);
else
return readl_relaxed(addr);
@@ -87,7 +87,7 @@ static inline u32 brcm_usb_readl(void __iomem *addr)
static inline void brcm_usb_writel(u32 val, void __iomem *addr)
{
/* See brcmnand_readl() comments */
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
__raw_writel(val, addr);
else
writel_relaxed(val, addr);
@@ -614,6 +614,7 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
sp->nsubnodes = node;

if (sp->num_lanes > SIERRA_MAX_LANES) {
ret = -EINVAL;
dev_err(dev, "Invalid lane configuration\n");
goto put_child2;
}

@@ -894,6 +894,7 @@ static int wiz_probe(struct platform_device *pdev)

if (wiz->typec_dir_delay < WIZ_TYPEC_DIR_DEBOUNCE_MIN ||
wiz->typec_dir_delay > WIZ_TYPEC_DIR_DEBOUNCE_MAX) {
ret = -EINVAL;
dev_err(dev, "Invalid typec-dir-debounce property\n");
goto err_addr_to_resource;
}

@@ -364,7 +364,7 @@ BD718XX_OPS(bd71837_buck_regulator_ops, regulator_list_voltage_linear_range,
NULL);

BD718XX_OPS(bd71837_buck_regulator_nolinear_ops, regulator_list_voltage_table,
regulator_map_voltage_ascend, bd718xx_set_voltage_sel_restricted,
regulator_map_voltage_ascend, bd71837_set_voltage_sel_restricted,
regulator_get_voltage_sel_regmap, regulator_set_voltage_time_sel,
NULL);
/*

@@ -1422,6 +1422,12 @@ static int set_machine_constraints(struct regulator_dev *rdev)
* and we have control then make sure it is enabled.
*/
if (rdev->constraints->always_on || rdev->constraints->boot_on) {
/* If we want to enable this regulator, make sure that we know
* the supplying regulator.
*/
if (rdev->supply_name && !rdev->supply)
return -EPROBE_DEFER;

if (rdev->supply) {
ret = regulator_enable(rdev->supply);
if (ret < 0) {

@@ -51,6 +51,7 @@ static const struct regulator_ops fan53880_ops = {
REGULATOR_LINEAR_RANGE(800000, 0xf, 0x73, 25000), \
}, \
.n_linear_ranges = 2, \
.n_voltages = 0x74, \
.vsel_reg = FAN53880_LDO ## _num ## VOUT, \
.vsel_mask = 0x7f, \
.enable_reg = FAN53880_ENABLE, \
@@ -76,6 +77,7 @@ static const struct regulator_desc fan53880_regulators[] = {
REGULATOR_LINEAR_RANGE(600000, 0x1f, 0xf7, 12500),
},
.n_linear_ranges = 2,
.n_voltages = 0xf8,
.vsel_reg = FAN53880_BUCKVOUT,
.vsel_mask = 0x7f,
.enable_reg = FAN53880_ENABLE,
@@ -95,6 +97,7 @@ static const struct regulator_desc fan53880_regulators[] = {
REGULATOR_LINEAR_RANGE(3000000, 0x4, 0x70, 25000),
},
.n_linear_ranges = 2,
.n_voltages = 0x71,
.vsel_reg = FAN53880_BOOSTVOUT,
.vsel_mask = 0x7f,
.enable_reg = FAN53880_ENABLE_BOOST,

@@ -814,6 +814,13 @@ static int max77620_regulator_probe(struct platform_device *pdev)
config.dev = dev;
config.driver_data = pmic;

/*
* Set of_node_reuse flag to prevent driver core from attempting to
* claim any pinmux resources already claimed by the parent device.
* Otherwise PMIC driver will fail to re-probe.
*/
device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);

for (id = 0; id < MAX77620_NUM_REGS; id++) {
struct regulator_dev *rdev;
struct regulator_desc *rdesc;

@@ -103,9 +103,47 @@ static int rtmv20_lsw_disable(struct regulator_dev *rdev)
return 0;
}

static int rtmv20_lsw_set_current_limit(struct regulator_dev *rdev, int min_uA,
int max_uA)
{
int sel;

if (min_uA > RTMV20_LSW_MAXUA || max_uA < RTMV20_LSW_MINUA)
return -EINVAL;

if (max_uA > RTMV20_LSW_MAXUA)
max_uA = RTMV20_LSW_MAXUA;

sel = (max_uA - RTMV20_LSW_MINUA) / RTMV20_LSW_STEPUA;

/* Ensure the selected setting is still in range */
if ((sel * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA) < min_uA)
return -EINVAL;

sel <<= ffs(rdev->desc->csel_mask) - 1;

return regmap_update_bits(rdev->regmap, rdev->desc->csel_reg,
rdev->desc->csel_mask, sel);
}

static int rtmv20_lsw_get_current_limit(struct regulator_dev *rdev)
{
unsigned int val;
int ret;

ret = regmap_read(rdev->regmap, rdev->desc->csel_reg, &val);
if (ret)
return ret;

val &= rdev->desc->csel_mask;
val >>= ffs(rdev->desc->csel_mask) - 1;

return val * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA;
}

static const struct regulator_ops rtmv20_regulator_ops = {
.set_current_limit = regulator_set_current_limit_regmap,
.get_current_limit = regulator_get_current_limit_regmap,
.set_current_limit = rtmv20_lsw_set_current_limit,
.get_current_limit = rtmv20_lsw_get_current_limit,
.enable = rtmv20_lsw_enable,
.disable = rtmv20_lsw_disable,
.is_enabled = regulator_is_enabled_regmap,

@@ -86,6 +86,7 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
struct vfio_ccw_private *private;
struct irb *irb;
bool is_final;
bool cp_is_finished = false;

private = container_of(work, struct vfio_ccw_private, io_work);
irb = &private->irb;
@@ -94,14 +95,21 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
(SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
if (scsw_is_solicited(&irb->scsw)) {
cp_update_scsw(&private->cp, &irb->scsw);
if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING)
if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING) {
cp_free(&private->cp);
cp_is_finished = true;
}
}
mutex_lock(&private->io_mutex);
memcpy(private->io_region->irb_area, irb, sizeof(*irb));
mutex_unlock(&private->io_mutex);

if (private->mdev && is_final)
/*
* Reset to IDLE only if processing of a channel program
* has finished. Do not overwrite a possible processing
* state if the final interrupt was for HSCH or CSCH.
*/
if (private->mdev && cp_is_finished)
private->state = VFIO_CCW_STATE_IDLE;

if (private->io_trigger)

@@ -318,6 +318,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
}

err_out:
private->state = VFIO_CCW_STATE_IDLE;
trace_vfio_ccw_fsm_io_request(scsw->cmd.fctl, schid,
io_region->ret_code, errstr);
}

@@ -276,8 +276,6 @@ static ssize_t vfio_ccw_mdev_write_io_region(struct vfio_ccw_private *private,
}

vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_IO_REQ);
if (region->ret_code != 0)
private->state = VFIO_CCW_STATE_IDLE;
ret = (region->ret_code != 0) ? region->ret_code : count;

out_unlock:

@@ -1220,6 +1220,7 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
was a result from the ABTS request rather than the CLEANUP
request */
set_bit(BNX2FC_FLAG_IO_CLEANUP, &io_req->req_flags);
rc = FAILED;
goto done;
}

@@ -3359,14 +3359,14 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
{
int i;

free_irq(pci_irq_vector(pdev, 1), hisi_hba);
free_irq(pci_irq_vector(pdev, 2), hisi_hba);
free_irq(pci_irq_vector(pdev, 11), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 1), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 2), hisi_hba);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 11), hisi_hba);
for (i = 0; i < hisi_hba->cq_nvecs; i++) {
struct hisi_sas_cq *cq = &hisi_hba->cq[i];
int nr = hisi_sas_intr_conv ? 16 : 16 + i;

free_irq(pci_irq_vector(pdev, nr), cq);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
}
pci_free_irq_vectors(pdev);
}

@@ -254,12 +254,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,

device_enable_async_suspend(&shost->shost_dev);

get_device(&shost->shost_gendev);
error = device_add(&shost->shost_dev);
if (error)
goto out_del_gendev;

get_device(&shost->shost_gendev);

if (shost->transportt->host_size) {
shost->shost_data = kzalloc(shost->transportt->host_size,
GFP_KERNEL);
@@ -278,33 +277,36 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,

if (!shost->work_q) {
error = -EINVAL;
goto out_free_shost_data;
goto out_del_dev;
}
}

error = scsi_sysfs_add_host(shost);
if (error)
goto out_destroy_host;
goto out_del_dev;

scsi_proc_host_add(shost);
scsi_autopm_put_host(shost);
return error;

out_destroy_host:
if (shost->work_q)
destroy_workqueue(shost->work_q);
out_free_shost_data:
kfree(shost->shost_data);
/*
* Any host allocation in this function will be freed in
* scsi_host_dev_release().
*/
out_del_dev:
device_del(&shost->shost_dev);
out_del_gendev:
/*
* Host state is SHOST_RUNNING so we have to explicitly release
* ->shost_dev.
*/
put_device(&shost->shost_dev);
device_del(&shost->shost_gendev);
out_disable_runtime_pm:
device_disable_async_suspend(&shost->shost_gendev);
pm_runtime_disable(&shost->shost_gendev);
pm_runtime_set_suspended(&shost->shost_gendev);
pm_runtime_put_noidle(&shost->shost_gendev);
scsi_mq_destroy_tags(shost);
fail:
return error;
}
@@ -345,7 +347,7 @@ static void scsi_host_dev_release(struct device *dev)

ida_simple_remove(&host_index_ida, shost->host_no);

if (parent)
if (shost->shost_state != SHOST_CREATED)
put_device(parent);
kfree(shost);
}
@@ -392,8 +394,10 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
mutex_init(&shost->scan_mutex);

index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL);
if (index < 0)
goto fail_kfree;
if (index < 0) {
kfree(shost);
return NULL;
}
shost->host_no = index;

shost->dma_channel = 0xff;
@@ -486,7 +490,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
shost_printk(KERN_WARNING, shost,
"error handler thread failed to spawn, error = %ld\n",
PTR_ERR(shost->ehandler));
goto fail_index_remove;
goto fail;
}

shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d",
@@ -495,17 +499,18 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
if (!shost->tmf_work_q) {
shost_printk(KERN_WARNING, shost,
"failed to create tmf workq\n");
goto fail_kthread;
goto fail;
}
scsi_proc_hostdir_add(shost->hostt);
return shost;
fail:
/*
* Host state is still SHOST_CREATED and that is enough to release
* ->shost_gendev. scsi_host_dev_release() will free
* dev_name(&shost->shost_dev).
*/
put_device(&shost->shost_gendev);

fail_kthread:
kthread_stop(shost->ehandler);
fail_index_remove:
ida_simple_remove(&host_index_ida, shost->host_no);
fail_kfree:
kfree(shost);
return NULL;
}
EXPORT_SYMBOL(scsi_host_alloc);

@@ -1559,10 +1559,12 @@ void qlt_stop_phase2(struct qla_tgt *tgt)
return;
}

mutex_lock(&tgt->ha->optrom_mutex);
mutex_lock(&vha->vha_tgt.tgt_mutex);
tgt->tgt_stop = 0;
tgt->tgt_stopped = 1;
mutex_unlock(&vha->vha_tgt.tgt_mutex);
mutex_unlock(&tgt->ha->optrom_mutex);

ql_dbg(ql_dbg_tgt_mgt, vha, 0xf00c, "Stop of tgt %p finished\n",
tgt);

@@ -587,7 +587,13 @@ static void pvscsi_complete_request(struct pvscsi_adapter *adapter,
case BTSTAT_SUCCESS:
case BTSTAT_LINKED_COMMAND_COMPLETED:
case BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG:
/* If everything went fine, let's move on.. */
/*
* Commands like INQUIRY may transfer less data than
* requested by the initiator via bufflen. Set residual
* count to make upper layer aware of the actual amount
* of data returned.
*/
scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
cmd->result = (DID_OK << 16);
break;

@@ -68,7 +68,7 @@
#define BCM2835_SPI_FIFO_SIZE 64
#define BCM2835_SPI_FIFO_SIZE_3_4 48
#define BCM2835_SPI_DMA_MIN_LENGTH 96
#define BCM2835_SPI_NUM_CS 4 /* raise as necessary */
#define BCM2835_SPI_NUM_CS 24 /* raise as necessary */
#define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \
| SPI_NO_CS | SPI_3WIRE)

@@ -1195,6 +1195,12 @@ static int bcm2835_spi_setup(struct spi_device *spi)
struct gpio_chip *chip;
u32 cs;

if (spi->chip_select >= BCM2835_SPI_NUM_CS) {
dev_err(&spi->dev, "only %d chip-selects supported\n",
BCM2835_SPI_NUM_CS - 1);
return -EINVAL;
}

/*
* Precalculate SPI slave's CS register value for ->prepare_message():
* The driver always uses software-controlled GPIO chip select, hence
@@ -1288,7 +1294,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
ctlr->use_gpio_descriptors = true;
ctlr->mode_bits = BCM2835_SPI_MODE_BITS;
ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
ctlr->num_chipselect = BCM2835_SPI_NUM_CS;
ctlr->num_chipselect = 3;
ctlr->setup = bcm2835_spi_setup;
ctlr->transfer_one = bcm2835_spi_transfer_one;
ctlr->handle_err = bcm2835_spi_handle_err;

@@ -181,6 +181,8 @@ int spi_bitbang_setup(struct spi_device *spi)
{
struct spi_bitbang_cs *cs = spi->controller_state;
struct spi_bitbang *bitbang;
bool initial_setup = false;
int retval;

bitbang = spi_master_get_devdata(spi->master);

@@ -189,22 +191,30 @@ int spi_bitbang_setup(struct spi_device *spi)
if (!cs)
return -ENOMEM;
spi->controller_state = cs;
initial_setup = true;
}

/* per-word shift register access, in hardware or bitbanging */
cs->txrx_word = bitbang->txrx_word[spi->mode & (SPI_CPOL|SPI_CPHA)];
if (!cs->txrx_word)
return -EINVAL;
if (!cs->txrx_word) {
retval = -EINVAL;
goto err_free;
}

if (bitbang->setup_transfer) {
int retval = bitbang->setup_transfer(spi, NULL);
retval = bitbang->setup_transfer(spi, NULL);
if (retval < 0)
return retval;
goto err_free;
}

dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs);

return 0;

err_free:
if (initial_setup)
kfree(cs);
return retval;
}
EXPORT_SYMBOL_GPL(spi_bitbang_setup);

@@ -440,6 +440,7 @@ static int fsl_spi_setup(struct spi_device *spi)
{
struct mpc8xxx_spi *mpc8xxx_spi;
struct fsl_spi_reg __iomem *reg_base;
bool initial_setup = false;
int retval;
u32 hw_mode;
struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
@@ -452,6 +453,7 @@ static int fsl_spi_setup(struct spi_device *spi)
if (!cs)
return -ENOMEM;
spi_set_ctldata(spi, cs);
initial_setup = true;
}
mpc8xxx_spi = spi_master_get_devdata(spi->master);

@@ -475,6 +477,8 @@ static int fsl_spi_setup(struct spi_device *spi)
retval = fsl_spi_setup_transfer(spi, NULL);
if (retval < 0) {
cs->hw_mode = hw_mode; /* Restore settings */
if (initial_setup)
kfree(cs);
return retval;
}

@@ -424,15 +424,22 @@ done:
static int uwire_setup(struct spi_device *spi)
{
struct uwire_state *ust = spi->controller_state;
bool initial_setup = false;
int status;

if (ust == NULL) {
ust = kzalloc(sizeof(*ust), GFP_KERNEL);
if (ust == NULL)
return -ENOMEM;
spi->controller_state = ust;
initial_setup = true;
}

return uwire_setup_transfer(spi, NULL);
status = uwire_setup_transfer(spi, NULL);
if (status && initial_setup)
kfree(ust);

return status;
}

static void uwire_cleanup(struct spi_device *spi)

||||
@@ -1032,8 +1032,22 @@ static void omap2_mcspi_release_dma(struct spi_master *master)
 	}
 }
 
+static void omap2_mcspi_cleanup(struct spi_device *spi)
+{
+	struct omap2_mcspi_cs *cs;
+
+	if (spi->controller_state) {
+		/* Unlink controller state from context save list */
+		cs = spi->controller_state;
+		list_del(&cs->node);
+
+		kfree(cs);
+	}
+}
+
 static int omap2_mcspi_setup(struct spi_device *spi)
 {
+	bool initial_setup = false;
 	int ret;
 	struct omap2_mcspi *mcspi = spi_master_get_devdata(spi->master);
 	struct omap2_mcspi_regs *ctx = &mcspi->ctx;

@@ -1051,35 +1065,28 @@ static int omap2_mcspi_setup(struct spi_device *spi)
 		spi->controller_state = cs;
 		/* Link this to context save list */
 		list_add_tail(&cs->node, &ctx->cs);
+		initial_setup = true;
 	}
 
 	ret = pm_runtime_get_sync(mcspi->dev);
 	if (ret < 0) {
 		pm_runtime_put_noidle(mcspi->dev);
+		if (initial_setup)
+			omap2_mcspi_cleanup(spi);
+
 		return ret;
 	}
 
 	ret = omap2_mcspi_setup_transfer(spi, NULL);
+	if (ret && initial_setup)
+		omap2_mcspi_cleanup(spi);
+
 	pm_runtime_mark_last_busy(mcspi->dev);
 	pm_runtime_put_autosuspend(mcspi->dev);
 
 	return ret;
 }
 
-static void omap2_mcspi_cleanup(struct spi_device *spi)
-{
-	struct omap2_mcspi_cs *cs;
-
-	if (spi->controller_state) {
-		/* Unlink controller state from context save list */
-		cs = spi->controller_state;
-		list_del(&cs->node);
-
-		kfree(cs);
-	}
-}
-
 static irqreturn_t omap2_mcspi_irq_handler(int irq, void *data)
 {
 	struct omap2_mcspi *mcspi = data;
@@ -1254,6 +1254,8 @@ static int setup_cs(struct spi_device *spi, struct chip_data *chip,
 		chip->gpio_cs_inverted = spi->mode & SPI_CS_HIGH;
 
 		err = gpiod_direction_output(gpiod, !chip->gpio_cs_inverted);
+		if (err)
+			gpiod_put(chip->gpiod_cs);
 	}
 
 	return err;

@@ -1267,6 +1269,7 @@ static int setup(struct spi_device *spi)
 	struct driver_data *drv_data =
 		spi_controller_get_devdata(spi->controller);
 	uint tx_thres, tx_hi_thres, rx_thres;
+	int err;
 
 	switch (drv_data->ssp_type) {
 	case QUARK_X1000_SSP:

@@ -1413,7 +1416,11 @@ static int setup(struct spi_device *spi)
 	if (drv_data->ssp_type == CE4100_SSP)
 		return 0;
 
-	return setup_cs(spi, chip, chip_info);
+	err = setup_cs(spi, chip, chip_info);
+	if (err)
+		kfree(chip);
+
+	return err;
 }
 
 static void cleanup(struct spi_device *spi)
@@ -1068,6 +1068,7 @@ static const struct of_device_id sprd_spi_of_match[] = {
 	{ .compatible = "sprd,sc9860-spi", },
 	{ /* sentinel */ }
 };
+MODULE_DEVICE_TABLE(of, sprd_spi_of_match);
 
 static struct platform_driver sprd_spi_driver = {
 	.driver = {
@@ -528,18 +528,17 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
 	struct zynq_qspi *xqspi = spi_controller_get_devdata(mem->spi->master);
 	int err = 0, i;
 	u8 *tmpbuf;
-	u8 opcode = op->cmd.opcode;
 
 	dev_dbg(xqspi->dev, "cmd:%#x mode:%d.%d.%d.%d\n",
-		opcode, op->cmd.buswidth, op->addr.buswidth,
+		op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth,
 		op->dummy.buswidth, op->data.buswidth);
 
 	zynq_qspi_chipselect(mem->spi, true);
 	zynq_qspi_config_op(xqspi, mem->spi);
 
-	if (op->cmd.nbytes) {
+	if (op->cmd.opcode) {
 		reinit_completion(&xqspi->data_completion);
-		xqspi->txbuf = &opcode;
+		xqspi->txbuf = (u8 *)&op->cmd.opcode;
 		xqspi->rxbuf = NULL;
 		xqspi->tx_bytes = op->cmd.nbytes;
 		xqspi->rx_bytes = op->cmd.nbytes;
@@ -47,10 +47,6 @@ static void spidev_release(struct device *dev)
 {
 	struct spi_device *spi = to_spi_device(dev);
 
-	/* spi controllers may cleanup for released devices */
-	if (spi->controller->cleanup)
-		spi->controller->cleanup(spi);
-
 	spi_controller_put(spi->controller);
 	kfree(spi->driver_override);
 	kfree(spi);

@@ -550,6 +546,12 @@ static int spi_dev_check(struct device *dev, void *data)
 	return 0;
 }
 
+static void spi_cleanup(struct spi_device *spi)
+{
+	if (spi->controller->cleanup)
+		spi->controller->cleanup(spi);
+}
+
 /**
  * spi_add_device - Add spi_device allocated with spi_alloc_device
  * @spi: spi_device to register

@@ -614,11 +616,13 @@ int spi_add_device(struct spi_device *spi)
 
 	/* Device may be bound to an active driver when this returns */
 	status = device_add(&spi->dev);
-	if (status < 0)
+	if (status < 0) {
 		dev_err(dev, "can't add %s, status %d\n",
 			dev_name(&spi->dev), status);
-	else
+		spi_cleanup(spi);
+	} else {
 		dev_dbg(dev, "registered child %s\n", dev_name(&spi->dev));
+	}
 
 done:
 	mutex_unlock(&spi_add_lock);

@@ -711,7 +715,9 @@ void spi_unregister_device(struct spi_device *spi)
 	}
 	if (ACPI_COMPANION(&spi->dev))
 		acpi_device_clear_enumerated(ACPI_COMPANION(&spi->dev));
-	device_unregister(&spi->dev);
+	device_del(&spi->dev);
+	spi_cleanup(spi);
+	put_device(&spi->dev);
 }
 EXPORT_SYMBOL_GPL(spi_unregister_device);
@@ -2384,7 +2384,7 @@ void rtw_cfg80211_indicate_sta_assoc(struct adapter *padapter, u8 *pmgmt_frame,
 	DBG_871X(FUNC_ADPT_FMT"\n", FUNC_ADPT_ARG(padapter));
 
 	{
-		struct station_info sinfo;
+		struct station_info sinfo = {};
 		u8 ie_offset;
 		if (GetFrameSubType(pmgmt_frame) == WIFI_ASSOCREQ)
 			ie_offset = _ASOCREQ_IE_OFFSET_;
@@ -3255,8 +3255,10 @@ static int __cdns3_gadget_init(struct cdns3 *cdns)
 	pm_runtime_get_sync(cdns->dev);
 
 	ret = cdns3_gadget_start(cdns);
-	if (ret)
+	if (ret) {
+		pm_runtime_put_sync(cdns->dev);
 		return ret;
+	}
 
 	/*
 	 * Because interrupt line can be shared with other components in
@@ -2055,6 +2055,7 @@ static int udc_start(struct ci_hdrc *ci)
 	ci->gadget.name = ci->platdata->name;
 	ci->gadget.otg_caps = otg_caps;
 	ci->gadget.sg_supported = 1;
+	ci->gadget.irq = ci->irq;
 
 	if (ci->platdata->flags & CI_HDRC_REQUIRES_ALIGNED_DMA)
 		ci->gadget.quirk_avoids_skb_reserve = 1;
@@ -651,7 +651,7 @@ static int dwc3_meson_g12a_setup_regmaps(struct dwc3_meson_g12a *priv,
 		return PTR_ERR(priv->usb_glue_regmap);
 
 	/* Create a regmap for each USB2 PHY control register set */
-	for (i = 0; i < priv->usb2_ports; i++) {
+	for (i = 0; i < priv->drvdata->num_phys; i++) {
 		struct regmap_config u2p_regmap_config = {
 			.reg_bits = 8,
 			.val_bits = 32,

@@ -659,6 +659,9 @@ static int dwc3_meson_g12a_setup_regmaps(struct dwc3_meson_g12a *priv,
 			.max_register = U2P_R1,
 		};
 
+		if (!strstr(priv->drvdata->phy_names[i], "usb2"))
+			continue;
+
 		u2p_regmap_config.name = devm_kasprintf(priv->dev, GFP_KERNEL,
 							"u2p-%d", i);
 		if (!u2p_regmap_config.name)

@@ -772,13 +775,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
 
 	ret = priv->drvdata->usb_init(priv);
 	if (ret)
-		goto err_disable_clks;
+		goto err_disable_regulator;
 
 	/* Init PHYs */
 	for (i = 0 ; i < PHY_COUNT ; ++i) {
 		ret = phy_init(priv->phys[i]);
 		if (ret)
-			goto err_disable_clks;
+			goto err_disable_regulator;
 	}
 
 	/* Set PHY Power */

@@ -816,6 +819,10 @@ err_phys_exit:
 	for (i = 0 ; i < PHY_COUNT ; ++i)
 		phy_exit(priv->phys[i]);
 
+err_disable_regulator:
+	if (priv->vbus)
+		regulator_disable(priv->vbus);
+
 err_disable_clks:
 	clk_bulk_disable_unprepare(priv->drvdata->num_clks,
 				   priv->drvdata->clks);
@@ -292,6 +292,9 @@ static struct dwc3_ep *dwc3_wIndex_to_dep(struct dwc3 *dwc, __le16 wIndex_le)
 		epnum |= 1;
 
 	dep = dwc->eps[epnum];
+	if (dep == NULL)
+		return NULL;
+
 	if (dep->flags & DWC3_EP_ENABLED)
 		return dep;
@@ -4072,6 +4072,7 @@ err5:
 	dwc3_gadget_free_endpoints(dwc);
 err4:
 	usb_put_gadget(dwc->gadget);
+	dwc->gadget = NULL;
 err3:
 	dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,
 			  dwc->bounce_addr);

@@ -4091,6 +4092,9 @@ err0:
 
 void dwc3_gadget_exit(struct dwc3 *dwc)
 {
+	if (!dwc->gadget)
+		return;
+
 	usb_del_gadget(dwc->gadget);
 	dwc3_gadget_free_endpoints(dwc);
 	usb_put_gadget(dwc->gadget);
@@ -495,7 +495,7 @@ static int eem_unwrap(struct gether *port,
 			skb2 = skb_clone(skb, GFP_ATOMIC);
 			if (unlikely(!skb2)) {
 				DBG(cdev, "unable to unframe EEM packet\n");
-				continue;
+				goto next;
 			}
 			skb_trim(skb2, len - ETH_FCS_LEN);

@@ -505,7 +505,7 @@ static int eem_unwrap(struct gether *port,
 					GFP_ATOMIC);
 			if (unlikely(!skb3)) {
 				dev_kfree_skb_any(skb2);
-				continue;
+				goto next;
 			}
 			dev_kfree_skb_any(skb2);
 			skb_queue_tail(list, skb3);
@@ -2009,9 +2009,8 @@ static void musb_pm_runtime_check_session(struct musb *musb)
 			schedule_delayed_work(&musb->irq_work,
 					      msecs_to_jiffies(1000));
 			musb->quirk_retries--;
-			break;
 		}
-		fallthrough;
+		break;
 	case MUSB_QUIRK_B_INVALID_VBUS_91:
 		if (musb->quirk_retries && !musb->flush_irq_work) {
 			musb_dbg(musb,
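The behavioral change in the musb hunk above — replacing `fallthrough;` with `break;` — can be illustrated with a minimal standalone switch. This is a hypothetical sketch (invented case values, not musb code) showing why the next `case` body stops executing once the fallthrough is removed:

```c
#include <assert.h>

/* With fallthrough, selecting case 0 also runs case 1's body. */
static int with_fallthrough(int sel)
{
	int hits = 0;

	switch (sel) {
	case 0:
		hits |= 1;
		/* fall through */
	case 1:
		hits |= 2;
		break;
	}
	return hits;
}

/* With break, selecting case 0 runs only case 0's body. */
static int with_break(int sel)
{
	int hits = 0;

	switch (sel) {
	case 0:
		hits |= 1;
		break;
	case 1:
		hits |= 2;
		break;
	}
	return hits;
}
```

In the patched driver this means the `MUSB_QUIRK_B_INVALID_VBUS_91` handling no longer runs after the preceding quirk case.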
@@ -533,6 +533,12 @@ struct cp210x_single_port_config {
 #define CP210X_2NCONFIG_GPIO_RSTLATCH_IDX	587
 #define CP210X_2NCONFIG_GPIO_CONTROL_IDX	600
 
+/* CP2102N QFN20 port configuration values */
+#define CP2102N_QFN20_GPIO2_TXLED_MODE	BIT(2)
+#define CP2102N_QFN20_GPIO3_RXLED_MODE	BIT(3)
+#define CP2102N_QFN20_GPIO1_RS485_MODE	BIT(4)
+#define CP2102N_QFN20_GPIO0_CLK_MODE	BIT(6)
+
 /* CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x2 bytes. */
 struct cp210x_gpio_write {
 	u8	mask;

@@ -1884,7 +1890,19 @@ static int cp2102n_gpioconf_init(struct usb_serial *serial)
 	priv->gpio_pushpull = (gpio_pushpull >> 3) & 0x0f;
 
 	/* 0 indicates GPIO mode, 1 is alternate function */
-	priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
+	if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN20) {
+		/* QFN20 is special... */
+		if (gpio_ctrl & CP2102N_QFN20_GPIO0_CLK_MODE)	/* GPIO 0 */
+			priv->gpio_altfunc |= BIT(0);
+		if (gpio_ctrl & CP2102N_QFN20_GPIO1_RS485_MODE)	/* GPIO 1 */
+			priv->gpio_altfunc |= BIT(1);
+		if (gpio_ctrl & CP2102N_QFN20_GPIO2_TXLED_MODE)	/* GPIO 2 */
+			priv->gpio_altfunc |= BIT(2);
+		if (gpio_ctrl & CP2102N_QFN20_GPIO3_RXLED_MODE)	/* GPIO 3 */
+			priv->gpio_altfunc |= BIT(3);
+	} else {
+		priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
+	}
 
 	if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN28) {
 		/*
@@ -611,6 +611,7 @@ static const struct usb_device_id id_table_combined[] = {
 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
+	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONMX_PID) },
 	{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
@@ -581,6 +581,7 @@
 #define FTDI_NT_ORIONLXM_PID		0x7c90	/* OrionLXm Substation Automation Platform */
 #define FTDI_NT_ORIONLX_PLUS_PID	0x7c91	/* OrionLX+ Substation Automation Platform */
 #define FTDI_NT_ORION_IO_PID		0x7c92	/* Orion I/O */
+#define FTDI_NT_ORIONMX_PID		0x7c93	/* OrionMX */
 
 /*
  * Synapse Wireless product ids (FTDI_VID)
@@ -26,6 +26,7 @@
 
 #define ZYXEL_VENDOR_ID		0x0586
 #define ZYXEL_OMNINET_ID	0x1000
+#define ZYXEL_OMNI_56K_PLUS_ID	0x1500
 /* This one seems to be a re-branded ZyXEL device */
 #define BT_IGNITIONPRO_ID	0x2000

@@ -40,6 +41,7 @@ static int omninet_port_remove(struct usb_serial_port *port);
 
 static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNINET_ID) },
+	{ USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNI_56K_PLUS_ID) },
 	{ USB_DEVICE(ZYXEL_VENDOR_ID, BT_IGNITIONPRO_ID) },
 	{ }	/* Terminating entry */
 };
@@ -416,7 +416,7 @@ static void qt2_close(struct usb_serial_port *port)
 
 	/* flush the port transmit buffer */
 	i = usb_control_msg(serial->dev,
-			    usb_rcvctrlpipe(serial->dev, 0),
+			    usb_sndctrlpipe(serial->dev, 0),
 			    QT2_FLUSH_DEVICE, 0x40, 1,
 			    port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);

@@ -426,7 +426,7 @@ static void qt2_close(struct usb_serial_port *port)
 
 	/* flush the port receive buffer */
 	i = usb_control_msg(serial->dev,
-			    usb_rcvctrlpipe(serial->dev, 0),
+			    usb_sndctrlpipe(serial->dev, 0),
 			    QT2_FLUSH_DEVICE, 0x40, 0,
 			    port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);

@@ -654,7 +654,7 @@ static int qt2_attach(struct usb_serial *serial)
 	int status;
 
 	/* power on unit */
-	status = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
+	status = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
 				 0xc2, 0x40, 0x8000, 0, NULL, 0,
 				 QT2_USB_TIMEOUT);
 	if (status < 0) {
@@ -586,6 +586,11 @@ static int pmc_usb_probe_iom(struct pmc_usb *pmc)
 		return -ENOMEM;
 	}
 
+	if (IS_ERR(pmc->iom_base)) {
+		put_device(&adev->dev);
+		return PTR_ERR(pmc->iom_base);
+	}
+
 	pmc->iom_adev = adev;
 
 	return 0;

@@ -636,8 +641,10 @@ static int pmc_usb_probe(struct platform_device *pdev)
 			break;
 
 		ret = pmc_usb_register_port(pmc, i, fwnode);
-		if (ret)
+		if (ret) {
+			fwnode_handle_put(fwnode);
 			goto err_remove_ports;
+		}
 	}
 
 	platform_set_drvdata(pdev, pmc);
@@ -378,7 +378,7 @@ static int wcove_pd_transmit(struct tcpc_dev *tcpc,
 	const u8 *data = (void *)msg;
 	int i;
 
-	for (i = 0; i < pd_header_cnt(msg->header) * 4 + 2; i++) {
+	for (i = 0; i < pd_header_cnt_le(msg->header) * 4 + 2; i++) {
 		ret = regmap_write(wcove->regmap, USBC_TX_DATA + i,
 				   data[i]);
 		if (ret)
@@ -2467,6 +2467,24 @@ static int validate_super(struct btrfs_fs_info *fs_info,
 		ret = -EINVAL;
 	}
 
+	if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
+		   BTRFS_FSID_SIZE)) {
+		btrfs_err(fs_info,
+		"superblock fsid doesn't match fsid of fs_devices: %pU != %pU",
+			  fs_info->super_copy->fsid, fs_info->fs_devices->fsid);
+		ret = -EINVAL;
+	}
+
+	if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&
+	    memcmp(fs_info->fs_devices->metadata_uuid,
+		   fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {
+		btrfs_err(fs_info,
+"superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",
+			fs_info->super_copy->metadata_uuid,
+			fs_info->fs_devices->metadata_uuid);
+		ret = -EINVAL;
+	}
+
 	if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,
 		   BTRFS_FSID_SIZE) != 0) {
 		btrfs_err(fs_info,

@@ -2969,14 +2987,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices
 
 	disk_super = fs_info->super_copy;
 
-	ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
-		       BTRFS_FSID_SIZE));
-
-	if (btrfs_fs_incompat(fs_info, METADATA_UUID)) {
-		ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid,
-			       fs_info->super_copy->metadata_uuid,
-			       BTRFS_FSID_SIZE));
-	}
-
 	features = btrfs_super_flags(disk_super);
 	if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
@@ -1088,7 +1088,7 @@ int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
 	int del_nr = 0;
 	int del_slot = 0;
 	int recow;
-	int ret;
+	int ret = 0;
 	u64 ino = btrfs_ino(inode);
 
 	path = btrfs_alloc_path();

@@ -1309,7 +1309,7 @@ again:
 	}
 out:
 	btrfs_free_path(path);
-	return 0;
+	return ret;
 }
 
 /*
@@ -406,7 +406,7 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
 
 	if (cl_init->hostname == NULL) {
 		WARN_ON(1);
-		return NULL;
+		return ERR_PTR(-EINVAL);
 	}
 
 	/* see if the client already exists */
@@ -205,6 +205,7 @@ struct nfs4_exception {
 	struct inode *inode;
 	nfs4_stateid *stateid;
 	long timeout;
+	unsigned char task_is_privileged : 1;
 	unsigned char delay : 1,
 		      recovering : 1,
 		      retry : 1;
@@ -435,8 +435,8 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
 		 */
 		nfs_mark_client_ready(clp, -EPERM);
 	}
-	nfs_put_client(clp);
 	clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags);
+	nfs_put_client(clp);
 	return old;
 
 error:
@@ -592,6 +592,8 @@ int nfs4_handle_exception(struct nfs_server *server, int errorcode, struct nfs4_
 		goto out_retry;
 	}
 	if (exception->recovering) {
+		if (exception->task_is_privileged)
+			return -EDEADLOCK;
 		ret = nfs4_wait_clnt_recover(clp);
 		if (test_bit(NFS_MIG_FAILED, &server->mig_status))
 			return -EIO;

@@ -617,6 +619,8 @@ nfs4_async_handle_exception(struct rpc_task *task, struct nfs_server *server,
 		goto out_retry;
 	}
 	if (exception->recovering) {
+		if (exception->task_is_privileged)
+			return -EDEADLOCK;
 		rpc_sleep_on(&clp->cl_rpcwaitq, task, NULL);
 		if (test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) == 0)
 			rpc_wake_up_queued_task(&clp->cl_rpcwaitq, task);

@@ -5942,6 +5946,14 @@ static int nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t buflen
 	do {
 		err = __nfs4_proc_set_acl(inode, buf, buflen);
 		trace_nfs4_set_acl(inode, err);
+		if (err == -NFS4ERR_BADOWNER || err == -NFS4ERR_BADNAME) {
+			/*
+			 * no need to retry since the kernel
+			 * isn't involved in encoding the ACEs.
+			 */
+			err = -EINVAL;
+			break;
+		}
 		err = nfs4_handle_exception(NFS_SERVER(inode), err,
 				&exception);
 	} while (exception.retry);

@@ -6383,6 +6395,7 @@ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata)
 	struct nfs4_exception exception = {
 		.inode = data->inode,
 		.stateid = &data->stateid,
+		.task_is_privileged = data->args.seq_args.sa_privileged,
 	};
 
 	if (!nfs4_sequence_done(task, &data->res.seq_res))

@@ -6506,7 +6519,6 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
 	data = kzalloc(sizeof(*data), GFP_NOFS);
 	if (data == NULL)
 		return -ENOMEM;
-	nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 0);
 
 	nfs4_state_protect(server->nfs_client,
 			NFS_SP4_MACH_CRED_CLEANUP,

@@ -6537,6 +6549,12 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
 		}
 	}
 
+	if (!data->inode)
+		nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1,
+				   1);
+	else
+		nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1,
+				   0);
 	task_setup_data.callback_data = data;
 	msg.rpc_argp = &data->args;
 	msg.rpc_resp = &data->res;

@@ -9622,15 +9640,20 @@ int nfs4_proc_layoutreturn(struct nfs4_layoutreturn *lrp, bool sync)
 		&task_setup_data.rpc_client, &msg);
 
 	dprintk("--> %s\n", __func__);
-	lrp->inode = nfs_igrab_and_active(lrp->args.inode);
 	if (!sync) {
+		lrp->inode = nfs_igrab_and_active(lrp->args.inode);
 		if (!lrp->inode) {
 			nfs4_layoutreturn_release(lrp);
 			return -EAGAIN;
 		}
 		task_setup_data.flags |= RPC_TASK_ASYNC;
 	}
-	nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0);
+	if (!lrp->inode)
+		nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
+				   1);
+	else
+		nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
+				   0);
 	task = rpc_run_task(&task_setup_data);
 	if (IS_ERR(task))
 		return PTR_ERR(task);
@@ -2676,6 +2676,13 @@ out:
 }
 
 #ifdef CONFIG_SECURITY
+static int proc_pid_attr_open(struct inode *inode, struct file *file)
+{
+	file->private_data = NULL;
+	__mem_open(inode, file, PTRACE_MODE_READ_FSCREDS);
+	return 0;
+}
+
 static ssize_t proc_pid_attr_read(struct file * file, char __user * buf,
 				  size_t count, loff_t *ppos)
 {

@@ -2706,7 +2713,7 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
 	int rv;
 
 	/* A task may only write when it was the opener. */
-	if (file->f_cred != current_real_cred())
+	if (file->private_data != current->mm)
 		return -EPERM;
 
 	rcu_read_lock();

@@ -2756,9 +2763,11 @@ out:
 }
 
 static const struct file_operations proc_pid_attr_operations = {
+	.open		= proc_pid_attr_open,
 	.read		= proc_pid_attr_read,
 	.write		= proc_pid_attr_write,
 	.llseek		= generic_file_llseek,
+	.release	= mem_release,
 };
 
 #define LSM_DIR_OPS(LSM) \
@@ -1012,6 +1012,7 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 #define PERCPU_DECRYPTED_SECTION					\
 	. = ALIGN(PAGE_SIZE);						\
+	*(.data..decrypted)						\
 	*(.data..percpu..decrypted)					\
 	. = ALIGN(PAGE_SIZE);
 #else
@@ -1104,7 +1104,15 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
 static inline unsigned long
 __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
+	/*
+	 * The index was checked originally in search_memslots. To avoid
+	 * that a malicious guest builds a Spectre gadget out of e.g. page
+	 * table walks, do not let the processor speculate loads outside
+	 * the guest's registered memslots.
+	 */
+	unsigned long offset = gfn - slot->base_gfn;
+	offset = array_index_nospec(offset, slot->npages);
+	return slot->userspace_addr + offset * PAGE_SIZE;
 }
 
 static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
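The `array_index_nospec()` clamp used in the kvm hunk above relies on a branch-free mask. The following is a hypothetical userspace sketch of that generic masking idiom (helper names invented here); it is not the kernel's `include/linux/nospec.h` itself, which additionally uses compiler barriers and per-arch overrides:

```c
#include <assert.h>

/* Mask is all ones when index < size, all zeroes otherwise, computed
 * without a branch so a mispredicted bounds check cannot steer a
 * speculative load outside the array. */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	/* If index < size, (size - 1 - index) does not wrap, the OR keeps the
	 * sign bit clear, ~ sets it, and the arithmetic right shift smears it
	 * across the word (~0UL).  If index >= size, the subtraction wraps,
	 * the sign bit of the OR is set, and the result is 0.  (Arithmetic
	 * shift of a negative value is implementation-defined in ISO C but
	 * sign-extending on the compilers the kernel supports.) */
	return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

static unsigned long clamp_index(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);	/* out of range -> 0 */
}
```

In the patched `__gfn_to_hva_memslot()` the clamped offset guarantees the speculated load stays within `slot->npages`.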
@@ -26,11 +26,11 @@ enum {
 	BD71828_REGULATOR_AMOUNT,
 };
 
-#define BD71828_BUCK1267_VOLTS	0xEF
-#define BD71828_BUCK3_VOLTS	0x10
-#define BD71828_BUCK4_VOLTS	0x20
-#define BD71828_BUCK5_VOLTS	0x10
-#define BD71828_LDO_VOLTS	0x32
+#define BD71828_BUCK1267_VOLTS	0x100
+#define BD71828_BUCK3_VOLTS	0x20
+#define BD71828_BUCK4_VOLTS	0x40
+#define BD71828_BUCK5_VOLTS	0x20
+#define BD71828_LDO_VOLTS	0x40
 /* LDO6 is fixed 1.8V voltage */
 #define BD71828_LDO_6_VOLTAGE	1800000
@@ -631,6 +631,7 @@ struct mlx4_caps {
 	bool wol_port[MLX4_MAX_PORTS + 1];
 	struct mlx4_rate_limit_caps rl_caps;
 	u32 health_buffer_addrs;
+	bool map_clock_to_user;
 };
 
 struct mlx4_buf_list {
@@ -350,11 +350,19 @@ struct load_weight {
  * Only for tasks we track a moving average of the past instantaneous
  * estimated utilization. This allows to absorb sporadic drops in utilization
  * of an otherwise almost periodic task.
+ *
+ * The UTIL_AVG_UNCHANGED flag is used to synchronize util_est with util_avg
+ * updates. When a task is dequeued, its util_est should not be updated if its
+ * util_avg has not been updated in the meantime.
+ * This information is mapped into the MSB bit of util_est.enqueued at dequeue
+ * time. Since max value of util_est.enqueued for a task is 1024 (PELT util_avg
+ * for a task) it is safe to use MSB.
 */
 struct util_est {
 	unsigned int		enqueued;
 	unsigned int		ewma;
 #define UTIL_EST_WEIGHT_SHIFT	2
+#define UTIL_AVG_UNCHANGED	0x80000000
 } __attribute__((__aligned__(sizeof(u64))));
 
 /*
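The MSB trick described in the comment above — a task's `util_est.enqueued` never exceeds 1024, so bit 31 is free to carry the synchronization flag — can be illustrated with a small standalone sketch. The helper names below are invented for illustration; the real code manipulates `se.avg.util_est.enqueued` inline:

```c
#include <assert.h>

#define UTIL_AVG_UNCHANGED 0x80000000u	/* MSB of a 32-bit field */

/* Mark the value: util_avg has not been updated since this snapshot. */
static unsigned int set_unchanged(unsigned int enqueued)
{
	return enqueued | UTIL_AVG_UNCHANGED;
}

/* Strip the flag to recover the utilization value (<= 1024). */
static unsigned int value_of(unsigned int enqueued)
{
	return enqueued & ~UTIL_AVG_UNCHANGED;
}

/* Test the flag without disturbing the value. */
static int is_unchanged(unsigned int enqueued)
{
	return (enqueued & UTIL_AVG_UNCHANGED) != 0;
}
```

Because the flag bit lies above any representable utilization, value and flag round-trip losslessly through one `unsigned int`, which is why the series below moves the flag from the LSB (which did collide with the value) to the MSB.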
@@ -4960,6 +4960,12 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
 	m->ret_size = ret;
 
 	for (i = 0; i < nargs; i++) {
+		if (i == nargs - 1 && args[i].type == 0) {
+			bpf_log(log,
+				"The function %s with variable args is unsupported.\n",
+				tname);
+			return -EINVAL;
+		}
 		ret = __get_type_size(btf, args[i].type, &t);
 		if (ret < 0) {
 			bpf_log(log,

@@ -4967,6 +4973,12 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
 				tname, i, btf_kind_str[BTF_INFO_KIND(t->info)]);
 			return -EINVAL;
 		}
+		if (ret == 0) {
+			bpf_log(log,
+				"The function %s has malformed void argument.\n",
+				tname);
+			return -EINVAL;
+		}
 		m->arg_size[i] = ret;
 	}
 	m->nr_args = nargs;
@@ -823,6 +823,10 @@ static int cgroup1_rename(struct kernfs_node *kn, struct kernfs_node *new_parent
 	struct cgroup *cgrp = kn->priv;
 	int ret;
 
+	/* do not accept '\n' to prevent making /proc/<pid>/cgroup unparsable */
+	if (strchr(new_name_str, '\n'))
+		return -EINVAL;
+
 	if (kernfs_type(kn) != KERNFS_DIR)
 		return -ENOTDIR;
 	if (kn->parent != new_parent)

@@ -5677,8 +5677,6 @@ int __init cgroup_init_early(void)
 	return 0;
 }
 
-static u16 cgroup_disable_mask __initdata;
-
 /**
  * cgroup_init - cgroup initialization
 *

@@ -5737,12 +5735,8 @@ int __init cgroup_init(void)
 		 * disabled flag and cftype registration needs kmalloc,
 		 * both of which aren't available during early_init.
 		 */
-		if (cgroup_disable_mask & (1 << ssid)) {
-			static_branch_disable(cgroup_subsys_enabled_key[ssid]);
-			printk(KERN_INFO "Disabling %s control group subsystem\n",
-			       ss->name);
+		if (!cgroup_ssid_enabled(ssid))
 			continue;
-		}
 
 		if (cgroup1_ssid_disabled(ssid))
 			printk(KERN_INFO "Disabling %s control group subsystem in v1 mounts\n",

@@ -6257,7 +6251,10 @@ static int __init cgroup_disable(char *str)
 			if (strcmp(token, ss->name) &&
 			    strcmp(token, ss->legacy_name))
 				continue;
-			cgroup_disable_mask |= 1 << i;
+
+			static_branch_disable(cgroup_subsys_enabled_key[i]);
+			pr_info("Disabling %s control group subsystem\n",
+				ss->name);
 		}
 	}
 	return 1;
@@ -4548,7 +4548,9 @@ find_get_context(struct pmu *pmu, struct task_struct *task,
 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
 		ctx = &cpuctx->ctx;
 		get_ctx(ctx);
+		raw_spin_lock_irqsave(&ctx->lock, flags);
 		++ctx->pin_count;
+		raw_spin_unlock_irqrestore(&ctx->lock, flags);
 
 		return ctx;
 	}
@@ -890,6 +890,7 @@ __initcall(init_sched_debug_procfs);
 #define __PS(S, F) SEQ_printf(m, "%-45s:%21Ld\n", S, (long long)(F))
 #define __P(F) __PS(#F, F)
 #define P(F) __PS(#F, p->F)
+#define PM(F, M) __PS(#F, p->F & (M))
 #define __PSN(S, F) SEQ_printf(m, "%-45s:%14Ld.%06ld\n", S, SPLIT_NS((long long)(F)))
 #define __PN(F) __PSN(#F, F)
 #define PN(F) __PSN(#F, p->F)

@@ -1016,7 +1017,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 	P(se.avg.util_avg);
 	P(se.avg.last_update_time);
 	P(se.avg.util_est.ewma);
-	P(se.avg.util_est.enqueued);
+	PM(se.avg.util_est.enqueued, ~UTIL_AVG_UNCHANGED);
 #endif
 #ifdef CONFIG_UCLAMP_TASK
 	__PS("uclamp.min", p->uclamp_req[UCLAMP_MIN].value);
@@ -3510,10 +3510,9 @@ update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cf
 static inline void
 update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
 {
-	long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
+	long delta, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
 	unsigned long load_avg;
 	u64 load_sum = 0;
-	s64 delta_sum;
 	u32 divider;
 
 	if (!runnable_sum)

@@ -3560,13 +3559,13 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
 	load_sum = (s64)se_weight(se) * runnable_sum;
 	load_avg = div_s64(load_sum, divider);
 
-	delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum;
-	delta_avg = load_avg - se->avg.load_avg;
+	delta = load_avg - se->avg.load_avg;
 
 	se->avg.load_sum = runnable_sum;
 	se->avg.load_avg = load_avg;
-	add_positive(&cfs_rq->avg.load_avg, delta_avg);
-	add_positive(&cfs_rq->avg.load_sum, delta_sum);
+
+	add_positive(&cfs_rq->avg.load_avg, delta);
+	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
 }
 
 static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)

@@ -3916,7 +3915,7 @@ static inline unsigned long _task_util_est(struct task_struct *p)
 {
 	struct util_est ue = READ_ONCE(p->se.avg.util_est);
 
-	return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED);
+	return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));
 }
 
 static inline unsigned long task_util_est(struct task_struct *p)

@@ -4021,7 +4020,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	 * Reset EWMA on utilization increases, the moving average is used only
 	 * to smooth utilization decreases.
 	 */
-	ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
+	ue.enqueued = task_util(p);
 	if (sched_feat(UTIL_EST_FASTUP)) {
 		if (ue.ewma < ue.enqueued) {
 			ue.ewma = ue.enqueued;

@@ -4070,6 +4069,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	ue.ewma += last_ewma_diff;
 	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
 done:
+	ue.enqueued |= UTIL_AVG_UNCHANGED;
 	WRITE_ONCE(p->se.avg.util_est, ue);
 
 	trace_sched_util_est_se_tp(&p->se);

@@ -8106,7 +8106,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 		/* Propagate pending load changes to the parent, if any: */
 		se = cfs_rq->tg->se[cpu];
 		if (se && !skip_blocked_update(se))
-			update_load_avg(cfs_rq_of(se), se, 0);
+			update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
 
 		/*
 		 * There can be a lot of idle CPU cgroups. Don't let fully
@@ -42,15 +42,6 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
 	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
 }
 
-/*
- * When a task is dequeued, its estimated utilization should not be update if
- * its util_avg has not been updated at least once.
- * This flag is used to synchronize util_avg updates with util_est updates.
- * We map this information into the LSB bit of the utilization saved at
- * dequeue time (i.e. util_est.dequeued).
- */
-#define UTIL_AVG_UNCHANGED 0x1
-
 static inline void cfs_se_util_change(struct sched_avg *avg)
 {
 	unsigned int enqueued;

@@ -58,7 +49,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
 	if (!sched_feat(UTIL_EST))
 		return;
 
-	/* Avoid store if the flag has been already set */
+	/* Avoid store if the flag has been already reset */
 	enqueued = avg->util_est.enqueued;
 	if (!(enqueued & UTIL_AVG_UNCHANGED))
 		return;
Some files were not shown because too many files have changed in this diff.