Merge 5.15.87 into android14-5.15
Changes in 5.15.87
usb: dwc3: qcom: Fix memory leak in dwc3_qcom_interconnect_init
cifs: fix oops during encryption
Revert "selftests/bpf: Add test for unstable CT lookup API"
nvme-pci: fix doorbell buffer value endianness
nvme-pci: fix mempool alloc size
nvme-pci: fix page size checks
ACPI: resource: Skip IRQ override on Asus Vivobook K3402ZA/K3502ZA
ACPI: resource: do IRQ override on LENOVO IdeaPad
ACPI: resource: do IRQ override on XMG Core 15
ACPI: resource: do IRQ override on Lenovo 14ALC7
block, bfq: fix uaf for bfqq in bfq_exit_icq_bfqq
ata: ahci: Fix PCS quirk application for suspend
nvme: fix the NVME_CMD_EFFECTS_CSE_MASK definition
nvmet: don't defer passthrough commands with trivial effects to the workqueue
fs/ntfs3: Validate BOOT record_size
fs/ntfs3: Add overflow check for attribute size
fs/ntfs3: Validate data run offset
fs/ntfs3: Add null pointer check to attr_load_runs_vcn
fs/ntfs3: Fix memory leak on ntfs_fill_super() error path
fs/ntfs3: Add null pointer check for inode operations
fs/ntfs3: Validate attribute name offset
fs/ntfs3: Validate buffer length while parsing index
fs/ntfs3: Validate resident attribute name
fs/ntfs3: Fix slab-out-of-bounds read in run_unpack
soundwire: dmi-quirks: add quirk variant for LAPBC710 NUC15
fs/ntfs3: Validate index root when initialize NTFS security
fs/ntfs3: Use __GFP_NOWARN allocation at wnd_init()
fs/ntfs3: Use __GFP_NOWARN allocation at ntfs_fill_super()
fs/ntfs3: Delete duplicate condition in ntfs_read_mft()
fs/ntfs3: Fix slab-out-of-bounds in r_page
objtool: Fix SEGFAULT
powerpc/rtas: avoid device tree lookups in rtas_os_term()
powerpc/rtas: avoid scheduling in rtas_os_term()
HID: multitouch: fix Asus ExpertBook P2 P2451FA trackpoint
HID: plantronics: Additional PIDs for double volume key presses quirk
pstore: Properly assign mem_type property
pstore/zone: Use GFP_ATOMIC to allocate zone buffer
hfsplus: fix bug causing custom uid and gid being unable to be assigned with mount
binfmt: Fix error return code in load_elf_fdpic_binary()
ovl: Use ovl mounter's fsuid and fsgid in ovl_link()
ALSA: line6: correct midi status byte when receiving data from podxt
ALSA: line6: fix stack overflow in line6_midi_transmit
pnode: terminate at peers of source
mfd: mt6360: Add bounds checking in Regmap read/write call-backs
md: fix a crash in mempool_free
mm, compaction: fix fast_isolate_around() to stay within boundaries
f2fs: should put a page when checking the summary info
f2fs: allow to read node block after shutdown
mmc: vub300: fix warning - do not call blocking ops when !TASK_RUNNING
tpm: acpi: Call acpi_put_table() to fix memory leak
tpm: tpm_crb: Add the missed acpi_put_table() to fix memory leak
tpm: tpm_tis: Add the missed acpi_put_table() to fix memory leak
SUNRPC: Don't leak netobj memory when gss_read_proxy_verf() fails
kcsan: Instrument memcpy/memset/memmove with newer Clang
ASoC: Intel/SOF: use set_stream() instead of set_tdm_slots() for HDAudio
ASoC/SoundWire: dai: expand 'stream' concept beyond SoundWire
rcu-tasks: Simplify trc_read_check_handler() atomic operations
net/af_packet: add VLAN support for AF_PACKET SOCK_RAW GSO
net/af_packet: make sure to pull mac header
media: stv0288: use explicitly signed char
soc: qcom: Select REMAP_MMIO for LLCC driver
kest.pl: Fix grub2 menu handling for rebooting
ktest.pl minconfig: Unset configs instead of just removing them
jbd2: use the correct print format
perf/x86/intel/uncore: Disable I/O stacks to PMU mapping on ICX-D
perf/x86/intel/uncore: Clear attr_update properly
arm64: dts: qcom: sdm845-db845c: correct SPI2 pins drive strength
mmc: sdhci-sprd: Disable CLK_AUTO when the clock is less than 400K
btrfs: fix resolving backrefs for inline extent followed by prealloc
ARM: ux500: do not directly dereference __iomem
arm64: dts: qcom: sdm850-lenovo-yoga-c630: correct I2C12 pins drive strength
selftests: Use optional USERCFLAGS and USERLDFLAGS
PM/devfreq: governor: Add a private governor_data for governor
cpufreq: Init completion before kobject_init_and_add()
ALSA: patch_realtek: Fix Dell Inspiron Plus 16
ALSA: hda/realtek: Apply dual codec fixup for Dell Latitude laptops
fs: dlm: fix sock release if listen fails
fs: dlm: retry accept() until -EAGAIN or error returns
mptcp: mark ops structures as ro_after_init
mptcp: remove MPTCP 'ifdef' in TCP SYN cookies
dm cache: Fix ABBA deadlock between shrink_slab and dm_cache_metadata_abort
dm thin: Fix ABBA deadlock between shrink_slab and dm_pool_abort_metadata
dm thin: Use last transaction's pmd->root when commit failed
dm thin: resume even if in FAIL mode
dm thin: Fix UAF in run_timer_softirq()
dm integrity: Fix UAF in dm_integrity_dtr()
dm clone: Fix UAF in clone_dtr()
dm cache: Fix UAF in destroy()
dm cache: set needs_check flag after aborting metadata
tracing/hist: Fix out-of-bound write on 'action_data.var_ref_idx'
perf/core: Call LSM hook after copying perf_event_attr
of/kexec: Fix reading 32-bit "linux,initrd-{start,end}" values
KVM: VMX: Resume guest immediately when injecting #GP on ECREATE
KVM: nVMX: Inject #GP, not #UD, if "generic" VMXON CR0/CR4 check fails
KVM: nVMX: Properly expose ENABLE_USR_WAIT_PAUSE control to L1
x86/microcode/intel: Do not retry microcode reloading on the APs
ftrace/x86: Add back ftrace_expected for ftrace bug reports
x86/kprobes: Fix kprobes instruction boudary check with CONFIG_RETHUNK
x86/kprobes: Fix optprobe optimization check with CONFIG_RETHUNK
tracing: Fix race where eprobes can be called before the event
tracing: Fix complicated dependency of CONFIG_TRACER_MAX_TRACE
tracing/hist: Fix wrong return value in parse_action_params()
tracing/probes: Handle system names with hyphens
tracing: Fix infinite loop in tracing_read_pipe on overflowed print_trace_line
staging: media: tegra-video: fix chan->mipi value on error
staging: media: tegra-video: fix device_node use after free
ARM: 9256/1: NWFPE: avoid compiler-generated __aeabi_uldivmod
media: dvb-core: Fix double free in dvb_register_device()
media: dvb-core: Fix UAF due to refcount races at releasing
cifs: fix confusing debug message
cifs: fix missing display of three mount options
rtc: ds1347: fix value written to century register
block: mq-deadline: Do not break sequential write streams to zoned HDDs
md/bitmap: Fix bitmap chunk size overflow issues
efi: Add iMac Pro 2017 to uefi skip cert quirk
wifi: wilc1000: sdio: fix module autoloading
ASoC: jz4740-i2s: Handle independent FIFO flush bits
ipu3-imgu: Fix NULL pointer dereference in imgu_subdev_set_selection()
ipmi: fix long wait in unload when IPMI disconnect
mtd: spi-nor: Check for zero erase size in spi_nor_find_best_erase_type()
ima: Fix a potential NULL pointer access in ima_restore_measurement_list
ipmi: fix use after free in _ipmi_destroy_user()
PCI: Fix pci_device_is_present() for VFs by checking PF
PCI/sysfs: Fix double free in error path
riscv: stacktrace: Fixup ftrace_graph_ret_addr retp argument
riscv: mm: notify remote harts about mmu cache updates
crypto: n2 - add missing hash statesize
crypto: ccp - Add support for TEE for PCI ID 0x14CA
driver core: Fix bus_type.match() error handling in __driver_attach()
phy: qcom-qmp-combo: fix sc8180x reset
iommu/amd: Fix ivrs_acpihid cmdline parsing code
remoteproc: core: Do pm_relax when in RPROC_OFFLINE state
parisc: led: Fix potential null-ptr-deref in start_task()
device_cgroup: Roll back to original exceptions after copy failure
drm/connector: send hotplug uevent on connector cleanup
drm/vmwgfx: Validate the box size for the snooped cursor
drm/i915/dsi: fix VBT send packet port selection for dual link DSI
drm/ingenic: Fix missing platform_driver_unregister() call in ingenic_drm_init()
ext4: silence the warning when evicting inode with dioread_nolock
ext4: add inode table check in __ext4_get_inode_loc to aovid possible infinite loop
ext4: remove trailing newline from ext4_msg() message
fs: ext4: initialize fsdata in pagecache_write()
ext4: fix use-after-free in ext4_orphan_cleanup
ext4: fix undefined behavior in bit shift for ext4_check_flag_values
ext4: add EXT4_IGET_BAD flag to prevent unexpected bad inode
ext4: add helper to check quota inums
ext4: fix bug_on in __es_tree_search caused by bad quota inode
ext4: fix reserved cluster accounting in __es_remove_extent()
ext4: check and assert if marking an no_delete evicting inode dirty
ext4: fix bug_on in __es_tree_search caused by bad boot loader inode
ext4: fix leaking uninitialized memory in fast-commit journal
ext4: fix uninititialized value in 'ext4_evict_inode'
ext4: init quota for 'old.inode' in 'ext4_rename'
ext4: fix delayed allocation bug in ext4_clu_mapped for bigalloc + inline
ext4: fix corruption when online resizing a 1K bigalloc fs
ext4: fix error code return to user-space in ext4_get_branch()
ext4: avoid BUG_ON when creating xattrs
ext4: fix kernel BUG in 'ext4_write_inline_data_end()'
ext4: fix inode leak in ext4_xattr_inode_create() on an error path
ext4: initialize quota before expanding inode in setproject ioctl
ext4: avoid unaccounted block allocation when expanding inode
ext4: allocate extended attribute value in vmalloc area
drm/amdgpu: handle polaris10/11 overlap asics (v2)
drm/amdgpu: make display pinning more flexible (v2)
block: mq-deadline: Fix dd_finish_request() for zoned devices
tracing: Fix issue of missing one synthetic field
ext4: remove unused enum EXT4_FC_COMMIT_FAILED
ext4: use ext4_debug() instead of jbd_debug()
ext4: introduce EXT4_FC_TAG_BASE_LEN helper
ext4: factor out ext4_fc_get_tl()
ext4: fix potential out of bound read in ext4_fc_replay_scan()
ext4: disable fast-commit of encrypted dir operations
ext4: don't set up encryption key during jbd2 transaction
ext4: add missing validation of fast-commit record lengths
ext4: fix unaligned memory access in ext4_fc_reserve_space()
ext4: fix off-by-one errors in fast-commit block filling
ARM: renumber bits related to _TIF_WORK_MASK
phy: qcom-qmp-combo: fix out-of-bounds clock access
btrfs: replace strncpy() with strscpy()
btrfs: move missing device handling in a dedicate function
btrfs: fix extent map use-after-free when handling missing device in read_one_chunk
x86/mce: Get rid of msr_ops
x86/MCE/AMD: Clear DFR errors found in THR handler
media: s5p-mfc: Fix to handle reference queue during finishing
media: s5p-mfc: Clear workbit to handle error condition
media: s5p-mfc: Fix in register read and write for H264
perf probe: Use dwarf_attr_integrate as generic DWARF attr accessor
perf probe: Fix to get the DW_AT_decl_file and DW_AT_call_file as unsinged data
ravb: Fix "failed to switch device to config mode" message during unbind
ext4: goto right label 'failed_mount3a'
ext4: correct inconsistent error msg in nojournal mode
mbcache: automatically delete entries from cache on freeing
ext4: fix deadlock due to mbcache entry corruption
drm/i915/migrate: don't check the scratch page
drm/i915/migrate: fix offset calculation
drm/i915/migrate: fix length calculation
SUNRPC: ensure the matching upcall is in-flight upon downcall
btrfs: fix an error handling path in btrfs_defrag_leaves()
bpf: pull before calling skb_postpull_rcsum()
drm/panfrost: Fix GEM handle creation ref-counting
netfilter: nf_tables: consolidate set description
netfilter: nf_tables: add function to create set stateful expressions
netfilter: nf_tables: perform type checking for existing sets
vmxnet3: correctly report csum_level for encapsulated packet
netfilter: nf_tables: honor set timeout and garbage collection updates
veth: Fix race with AF_XDP exposing old or uninitialized descriptors
nfsd: shut down the NFSv4 state objects before the filecache
net: hns3: add interrupts re-initialization while doing VF FLR
net: hns3: refactor hns3_nic_reuse_page()
net: hns3: extract macro to simplify ring stats update code
net: hns3: fix miss L3E checking for rx packet
net: hns3: fix VF promisc mode not update when mac table full
net: sched: fix memory leak in tcindex_set_parms
qlcnic: prevent ->dcb use-after-free on qlcnic_dcb_enable() failure
net: dsa: mv88e6xxx: depend on PTP conditionally
nfc: Fix potential resource leaks
vdpa_sim: fix possible memory leak in vdpasim_net_init() and vdpasim_blk_init()
vhost/vsock: Fix error handling in vhost_vsock_init()
vringh: fix range used in iotlb_translate()
vhost: fix range used in translate_desc()
vdpa_sim: fix vringh initialization in vdpasim_queue_ready()
net/mlx5: E-Switch, properly handle ingress tagged packets on VST
net/mlx5: Add forgotten cleanup calls into mlx5_init_once() error path
net/mlx5: Avoid recovery in probe flows
net/mlx5e: IPoIB, Don't allow CQE compression to be turned on by default
net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attr
net/mlx5e: Always clear dest encap in neigh-update-del
net/mlx5e: Fix hw mtu initializing at XDP SQ allocation
net: amd-xgbe: add missed tasklet_kill
net: ena: Fix toeplitz initial hash value
net: ena: Don't register memory info on XDP exchange
net: ena: Account for the number of processed bytes in XDP
net: ena: Use bitmask to indicate packet redirection
net: ena: Fix rx_copybreak value update
net: ena: Set default value for RX interrupt moderation
net: ena: Update NUMA TPH hint register upon NUMA node update
net: phy: xgmiitorgmii: Fix refcount leak in xgmiitorgmii_probe
RDMA/mlx5: Fix mlx5_ib_get_hw_stats when used for device
RDMA/mlx5: Fix validation of max_rd_atomic caps for DC
drm/meson: Reduce the FIFO lines held when AFBC is not used
filelock: new helper: vfs_inode_has_locks
ceph: switch to vfs_inode_has_locks() to fix file lock bug
gpio: sifive: Fix refcount leak in sifive_gpio_probe
net: sched: atm: dont intepret cls results when asked to drop
net: sched: cbq: dont intepret cls results when asked to drop
net: sparx5: Fix reading of the MAC address
netfilter: ipset: fix hash:net,port,net hang with /0 subnet
netfilter: ipset: Rework long task execution when adding/deleting entries
perf tools: Fix resources leak in perf_data__open_dir()
drm/imx: ipuv3-plane: Fix overlay plane width
fs/ntfs3: don't hold ni_lock when calling truncate_setsize()
drivers/net/bonding/bond_3ad: return when there's no aggregator
octeontx2-pf: Fix lmtst ID used in aura free
usb: rndis_host: Secure rndis_query check against int overflow
perf stat: Fix handling of --for-each-cgroup with --bpf-counters to match non BPF mode
drm/i915: unpin on error in intel_vgpu_shadow_mm_pin()
caif: fix memory leak in cfctrl_linkup_request()
udf: Fix extension of the last extent in the file
ASoC: Intel: bytcr_rt5640: Add quirk for the Advantech MICA-071 tablet
nvme: fix multipath crash caused by flush request when blktrace is enabled
io_uring: check for valid register opcode earlier
nvmet: use NVME_CMD_EFFECTS_CSUPP instead of open coding it
nvme: also return I/O command effects from nvme_command_effects
btrfs: check superblock to ensure the fs was not modified at thaw time
x86/kexec: Fix double-free of elf header buffer
x86/bugs: Flush IBP in ib_prctl_set()
nfsd: fix handling of readdir in v4root vs. mount upcall timeout
fbdev: matroxfb: G200eW: Increase max memory from 1 MB to 16 MB
block: don't allow splitting of a REQ_NOWAIT bio
io_uring: fix CQ waiting timeout handling
thermal: int340x: Add missing attribute for data rate base
riscv: uaccess: fix type of 0 variable on error in get_user()
riscv, kprobes: Stricter c.jr/c.jalr decoding
drm/i915/gvt: fix gvt debugfs destroy
drm/i915/gvt: fix vgpu debugfs clean in remove
hfs/hfsplus: use WARN_ON for sanity check
hfs/hfsplus: avoid WARN_ON() for sanity check, use proper error handling
ksmbd: fix infinite loop in ksmbd_conn_handler_loop()
ksmbd: check nt_len to be at least CIFS_ENCPWD_SIZE in ksmbd_decode_ntlmssp_auth_blob
Revert "ACPI: PM: Add support for upcoming AMD uPEP HID AMDI007"
mptcp: dedicated request sock for subflow in v6
mptcp: use proper req destructor for IPv6
ext4: don't allow journal inode to have encrypt flag
selftests: set the BUILD variable to absolute path
btrfs: make thaw time super block check to also verify checksum
net: hns3: fix return value check bug of rx copybreak
mbcache: Avoid nesting of cache->c_list_lock under bit locks
efi: random: combine bootloader provided RNG seed with RNG protocol output
io_uring: Fix unsigned 'res' comparison with zero in io_fixup_rw_res()
drm/mgag200: Fix PLL setup for G200_SE_A rev >=4
Linux 5.15.87
Change-Id: I06fb376627506652ed60c04d56074956e6e075a0
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -434,8 +434,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
 	current->mm->start_stack = current->mm->start_brk + stack_size;
 #endif
 
-	if (create_elf_fdpic_tables(bprm, current->mm,
-				    &exec_params, &interp_params) < 0)
+	retval = create_elf_fdpic_tables(bprm, current->mm, &exec_params,
+					 &interp_params);
+	if (retval < 0)
 		goto error;
 
 	kdebug("- start_code %lx", current->mm->start_code);
@@ -433,6 +433,7 @@ static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
 	u64 wanted_disk_byte = ref->wanted_disk_byte;
 	u64 count = 0;
 	u64 data_offset;
+	u8 type;
 
 	if (level != 0) {
 		eb = path->nodes[level];
@@ -487,6 +488,9 @@ static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
 				continue;
 			}
 			fi = btrfs_item_ptr(eb, slot, struct btrfs_file_extent_item);
+			type = btrfs_file_extent_type(eb, fi);
+			if (type == BTRFS_FILE_EXTENT_INLINE)
+				goto next;
 			disk_byte = btrfs_file_extent_disk_bytenr(eb, fi);
 			data_offset = btrfs_file_extent_offset(eb, fi);
@@ -202,11 +202,9 @@ static bool btrfs_supported_super_csum(u16 csum_type)
  * Return 0 if the superblock checksum type matches the checksum value of that
  * algorithm. Pass the raw disk superblock data.
  */
-static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
-				  char *raw_disk_sb)
+int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+			   const struct btrfs_super_block *disk_sb)
 {
-	struct btrfs_super_block *disk_sb =
-		(struct btrfs_super_block *)raw_disk_sb;
 	char result[BTRFS_CSUM_SIZE];
 	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
 
@@ -217,7 +215,7 @@ static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
 	 * BTRFS_SUPER_INFO_SIZE range, we expect that the unused space is
 	 * filled with zeros and is included in the checksum.
 	 */
-	crypto_shash_digest(shash, raw_disk_sb + BTRFS_CSUM_SIZE,
+	crypto_shash_digest(shash, (const u8 *)disk_sb + BTRFS_CSUM_SIZE,
 			    BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE, result);
 
 	if (memcmp(disk_sb->csum, result, fs_info->csum_size))
@@ -2491,8 +2489,8 @@ out:
  *   1, 2	2nd and 3rd backup copy
  *  -1		skip bytenr check
  */
-static int validate_super(struct btrfs_fs_info *fs_info,
-			  struct btrfs_super_block *sb, int mirror_num)
+int btrfs_validate_super(struct btrfs_fs_info *fs_info,
+			 struct btrfs_super_block *sb, int mirror_num)
 {
 	u64 nodesize = btrfs_super_nodesize(sb);
 	u64 sectorsize = btrfs_super_sectorsize(sb);
@@ -2675,7 +2673,7 @@ static int validate_super(struct btrfs_fs_info *fs_info,
  */
 static int btrfs_validate_mount_super(struct btrfs_fs_info *fs_info)
 {
-	return validate_super(fs_info, fs_info->super_copy, 0);
+	return btrfs_validate_super(fs_info, fs_info->super_copy, 0);
 }
 
 /*
@@ -2689,7 +2687,7 @@ static int btrfs_validate_write_super(struct btrfs_fs_info *fs_info,
 {
 	int ret;
 
-	ret = validate_super(fs_info, sb, -1);
+	ret = btrfs_validate_super(fs_info, sb, -1);
 	if (ret < 0)
 		goto out;
 	if (!btrfs_supported_super_csum(btrfs_super_csum_type(sb))) {
@@ -3210,7 +3208,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 	 * We want to check superblock checksum, the type is stored inside.
 	 * Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k).
 	 */
-	if (btrfs_check_super_csum(fs_info, (u8 *)disk_super)) {
+	if (btrfs_check_super_csum(fs_info, disk_super)) {
 		btrfs_err(fs_info, "superblock checksum mismatch");
 		err = -EINVAL;
 		btrfs_release_disk_super(disk_super);
@@ -3703,7 +3701,7 @@ static void btrfs_end_super_write(struct bio *bio)
 }
 
 struct btrfs_super_block *btrfs_read_dev_one_super(struct block_device *bdev,
-						   int copy_num)
+						   int copy_num, bool drop_cache)
 {
 	struct btrfs_super_block *super;
 	struct page *page;
@@ -3721,6 +3719,19 @@ struct btrfs_super_block *btrfs_read_dev_one_super(struct block_device *bdev,
 	if (bytenr + BTRFS_SUPER_INFO_SIZE >= i_size_read(bdev->bd_inode))
 		return ERR_PTR(-EINVAL);
 
+	if (drop_cache) {
+		/* This should only be called with the primary sb. */
+		ASSERT(copy_num == 0);
+
+		/*
+		 * Drop the page of the primary superblock, so later read will
+		 * always read from the device.
+		 */
+		invalidate_inode_pages2_range(mapping,
+				bytenr >> PAGE_SHIFT,
+				(bytenr + BTRFS_SUPER_INFO_SIZE) >> PAGE_SHIFT);
+	}
+
 	page = read_cache_page_gfp(mapping, bytenr >> PAGE_SHIFT, GFP_NOFS);
 	if (IS_ERR(page))
 		return ERR_CAST(page);
@@ -3752,7 +3763,7 @@ struct btrfs_super_block *btrfs_read_dev_super(struct block_device *bdev)
 	 * later supers, using BTRFS_SUPER_MIRROR_MAX instead
 	 */
 	for (i = 0; i < 1; i++) {
-		super = btrfs_read_dev_one_super(bdev, i);
+		super = btrfs_read_dev_one_super(bdev, i, false);
 		if (IS_ERR(super))
 			continue;
@@ -52,14 +52,18 @@ struct extent_buffer *btrfs_find_create_tree_block(
 void btrfs_clean_tree_block(struct extent_buffer *buf);
 void btrfs_clear_oneshot_options(struct btrfs_fs_info *fs_info);
 int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info);
+int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+			   const struct btrfs_super_block *disk_sb);
 int __cold open_ctree(struct super_block *sb,
 	       struct btrfs_fs_devices *fs_devices,
 	       char *options);
 void __cold close_ctree(struct btrfs_fs_info *fs_info);
+int btrfs_validate_super(struct btrfs_fs_info *fs_info,
+			 struct btrfs_super_block *sb, int mirror_num);
 int write_all_supers(struct btrfs_fs_info *fs_info, int max_mirrors);
 struct btrfs_super_block *btrfs_read_dev_super(struct block_device *bdev);
 struct btrfs_super_block *btrfs_read_dev_one_super(struct block_device *bdev,
-						   int copy_num);
+						   int copy_num, bool drop_cache);
 int btrfs_commit_super(struct btrfs_fs_info *fs_info);
 struct btrfs_root *btrfs_read_tree_root(struct btrfs_root *tree_root,
					 struct btrfs_key *key);
@@ -3415,13 +3415,10 @@ static long btrfs_ioctl_dev_info(struct btrfs_fs_info *fs_info,
 	di_args->bytes_used = btrfs_device_get_bytes_used(dev);
 	di_args->total_bytes = btrfs_device_get_total_bytes(dev);
 	memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
-	if (dev->name) {
-		strncpy(di_args->path, rcu_str_deref(dev->name),
-				sizeof(di_args->path) - 1);
-		di_args->path[sizeof(di_args->path) - 1] = 0;
-	} else {
+	if (dev->name)
+		strscpy(di_args->path, rcu_str_deref(dev->name), sizeof(di_args->path));
+	else
 		di_args->path[0] = '\0';
-	}
 
 out:
 	rcu_read_unlock();
@@ -18,7 +18,11 @@ static inline struct rcu_string *rcu_string_strdup(const char *src, gfp_t mask)
 					 (len * sizeof(char)), mask);
 	if (!ret)
 		return ret;
-	strncpy(ret->str, src, len);
+	/* Warn if the source got unexpectedly truncated. */
+	if (WARN_ON(strscpy(ret->str, src, len) < 0)) {
+		kfree(ret);
+		return NULL;
+	}
 	return ret;
 }
 
@@ -2497,11 +2497,87 @@ static int btrfs_freeze(struct super_block *sb)
 	return btrfs_commit_transaction(trans);
 }
 
+static int check_dev_super(struct btrfs_device *dev)
+{
+	struct btrfs_fs_info *fs_info = dev->fs_info;
+	struct btrfs_super_block *sb;
+	u16 csum_type;
+	int ret = 0;
+
+	/* This should be called with fs still frozen. */
+	ASSERT(test_bit(BTRFS_FS_FROZEN, &fs_info->flags));
+
+	/* Missing dev, no need to check. */
+	if (!dev->bdev)
+		return 0;
+
+	/* Only need to check the primary super block. */
+	sb = btrfs_read_dev_one_super(dev->bdev, 0, true);
+	if (IS_ERR(sb))
+		return PTR_ERR(sb);
+
+	/* Verify the checksum. */
+	csum_type = btrfs_super_csum_type(sb);
+	if (csum_type != btrfs_super_csum_type(fs_info->super_copy)) {
+		btrfs_err(fs_info, "csum type changed, has %u expect %u",
+			  csum_type, btrfs_super_csum_type(fs_info->super_copy));
+		ret = -EUCLEAN;
+		goto out;
+	}
+
+	if (btrfs_check_super_csum(fs_info, sb)) {
+		btrfs_err(fs_info, "csum for on-disk super block no longer matches");
+		ret = -EUCLEAN;
+		goto out;
+	}
+
+	/* Btrfs_validate_super() includes fsid check against super->fsid. */
+	ret = btrfs_validate_super(fs_info, sb, 0);
+	if (ret < 0)
+		goto out;
+
+	if (btrfs_super_generation(sb) != fs_info->last_trans_committed) {
+		btrfs_err(fs_info, "transid mismatch, has %llu expect %llu",
+			  btrfs_super_generation(sb),
+			  fs_info->last_trans_committed);
+		ret = -EUCLEAN;
+		goto out;
+	}
+out:
+	btrfs_release_disk_super(sb);
+	return ret;
+}
+
 static int btrfs_unfreeze(struct super_block *sb)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+	struct btrfs_device *device;
+	int ret = 0;
 
+	/*
+	 * Make sure the fs is not changed by accident (like hibernation then
+	 * modified by other OS).
+	 * If we found anything wrong, we mark the fs error immediately.
+	 *
+	 * And since the fs is frozen, no one can modify the fs yet, thus
+	 * we don't need to hold device_list_mutex.
+	 */
+	list_for_each_entry(device, &fs_info->fs_devices->devices, dev_list) {
+		ret = check_dev_super(device);
+		if (ret < 0) {
+			btrfs_handle_fs_error(fs_info, ret,
+				"super block on devid %llu got modified unexpectedly",
+				device->devid);
+			break;
+		}
+	}
 	clear_bit(BTRFS_FS_FROZEN, &fs_info->flags);
+
+	/*
+	 * We still return 0, to allow VFS layer to unfreeze the fs even the
+	 * above checks failed. Since the fs is either fine or read-only, we're
+	 * safe to continue, without causing further damage.
+	 */
 	return 0;
 }
 
@@ -39,8 +39,10 @@ int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
 		goto out;
 
 	path = btrfs_alloc_path();
-	if (!path)
-		return -ENOMEM;
+	if (!path) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	level = btrfs_header_level(root->node);
@@ -2074,7 +2074,7 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
 		struct page *page;
 		int ret;
 
-		disk_super = btrfs_read_dev_one_super(bdev, copy_num);
+		disk_super = btrfs_read_dev_one_super(bdev, copy_num, false);
 		if (IS_ERR(disk_super))
 			continue;
 
@@ -7043,6 +7043,27 @@ static void warn_32bit_meta_chunk(struct btrfs_fs_info *fs_info,
 }
 #endif
 
+static struct btrfs_device *handle_missing_device(struct btrfs_fs_info *fs_info,
+						  u64 devid, u8 *uuid)
+{
+	struct btrfs_device *dev;
+
+	if (!btrfs_test_opt(fs_info, DEGRADED)) {
+		btrfs_report_missing_device(fs_info, devid, uuid, true);
+		return ERR_PTR(-ENOENT);
+	}
+
+	dev = add_missing_dev(fs_info->fs_devices, devid, uuid);
+	if (IS_ERR(dev)) {
+		btrfs_err(fs_info, "failed to init missing device %llu: %ld",
+			  devid, PTR_ERR(dev));
+		return dev;
+	}
+	btrfs_report_missing_device(fs_info, devid, uuid, false);
+
+	return dev;
+}
+
 static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
 			  struct btrfs_chunk *chunk)
 {
@@ -7130,28 +7151,18 @@ static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
 			   BTRFS_UUID_SIZE);
 		args.uuid = uuid;
 		map->stripes[i].dev = btrfs_find_device(fs_info->fs_devices, &args);
-		if (!map->stripes[i].dev &&
-		    !btrfs_test_opt(fs_info, DEGRADED)) {
-			free_extent_map(em);
-			btrfs_report_missing_device(fs_info, devid, uuid, true);
-			return -ENOENT;
-		}
 		if (!map->stripes[i].dev) {
-			map->stripes[i].dev =
-				add_missing_dev(fs_info->fs_devices, devid,
-						uuid);
+			map->stripes[i].dev = handle_missing_device(fs_info,
+								    devid, uuid);
 			if (IS_ERR(map->stripes[i].dev)) {
+				ret = PTR_ERR(map->stripes[i].dev);
 				free_extent_map(em);
-				btrfs_err(fs_info,
-					"failed to init missing dev %llu: %ld",
-					devid, PTR_ERR(map->stripes[i].dev));
-				return PTR_ERR(map->stripes[i].dev);
+				return ret;
 			}
-			btrfs_report_missing_device(fs_info, devid, uuid, false);
 		}
 
 		set_bit(BTRFS_DEV_STATE_IN_FS_METADATA,
 			&(map->stripes[i].dev->dev_state));
 	}
 
 	write_lock(&map_tree->lock);
@@ -2872,7 +2872,7 @@ int ceph_get_caps(struct file *filp, int need, int want, loff_t endoff, int *got
 
 	while (true) {
 		flags &= CEPH_FILE_MODE_MASK;
-		if (atomic_read(&fi->num_locks))
+		if (vfs_inode_has_locks(inode))
 			flags |= CHECK_FILELOCK;
 		_got = 0;
 		ret = try_get_cap_refs(inode, need, want, endoff,
@@ -32,18 +32,14 @@ void __init ceph_flock_init(void)
 
 static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
-	struct ceph_file_info *fi = dst->fl_file->private_data;
 	struct inode *inode = file_inode(dst->fl_file);
 	atomic_inc(&ceph_inode(inode)->i_filelock_ref);
-	atomic_inc(&fi->num_locks);
 }
 
 static void ceph_fl_release_lock(struct file_lock *fl)
 {
-	struct ceph_file_info *fi = fl->fl_file->private_data;
 	struct inode *inode = file_inode(fl->fl_file);
 	struct ceph_inode_info *ci = ceph_inode(inode);
-	atomic_dec(&fi->num_locks);
 	if (atomic_dec_and_test(&ci->i_filelock_ref)) {
 		/* clear error when all locks are released */
 		spin_lock(&ci->i_ceph_lock);
@@ -773,7 +773,6 @@ struct ceph_file_info {
 	struct list_head rw_contexts;
 
 	u32 filp_gen;
-	atomic_t num_locks;
 };
 
 struct ceph_dir_file_info {
@@ -656,9 +656,15 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
|
||||
seq_printf(s, ",echo_interval=%lu",
|
||||
tcon->ses->server->echo_interval / HZ);
|
||||
|
||||
/* Only display max_credits if it was overridden on mount */
|
||||
/* Only display the following if overridden on mount */
|
||||
if (tcon->ses->server->max_credits != SMB2_MAX_CREDITS_AVAILABLE)
|
||||
seq_printf(s, ",max_credits=%u", tcon->ses->server->max_credits);
|
||||
if (tcon->ses->server->tcp_nodelay)
|
||||
seq_puts(s, ",tcpnodelay");
|
||||
if (tcon->ses->server->noautotune)
|
||||
seq_puts(s, ",noautotune");
|
||||
if (tcon->ses->server->noblocksnd)
|
||||
seq_puts(s, ",noblocksend");
|
||||
|
||||
if (tcon->snapshot_time)
|
||||
seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
|
||||
|
||||
@@ -13,6 +13,8 @@
#include <linux/in6.h>
#include <linux/inet.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
#include <linux/mm.h>
#include <linux/mempool.h>
#include <linux/workqueue.h>
#include "cifs_fs_sb.h"
@@ -21,6 +23,7 @@
#include <linux/scatterlist.h>
#include <uapi/linux/cifs/cifs_mount.h>
#include "smb2pdu.h"
#include "smb2glob.h"

#define CIFS_MAGIC_NUMBER 0xFF534D42 /* the first four bytes of SMB PDUs */

@@ -1972,4 +1975,70 @@ static inline bool cifs_is_referral_server(struct cifs_tcon *tcon,
return is_tcon_dfs(tcon) || (ref && (ref->flags & DFSREF_REFERRAL_SERVER));
}

static inline unsigned int cifs_get_num_sgs(const struct smb_rqst *rqst,
int num_rqst,
const u8 *sig)
{
unsigned int len, skip;
unsigned int nents = 0;
unsigned long addr;
int i, j;

/* Assumes the first rqst has a transform header as the first iov.
 * I.e.
 * rqst[0].rq_iov[0] is transform header
 * rqst[0].rq_iov[1+] data to be encrypted/decrypted
 * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
 */
for (i = 0; i < num_rqst; i++) {
/*
 * The first rqst has a transform header where the
 * first 20 bytes are not part of the encrypted blob.
 */
for (j = 0; j < rqst[i].rq_nvec; j++) {
struct kvec *iov = &rqst[i].rq_iov[j];

skip = (i == 0) && (j == 0) ? 20 : 0;
addr = (unsigned long)iov->iov_base + skip;
if (unlikely(is_vmalloc_addr((void *)addr))) {
len = iov->iov_len - skip;
nents += DIV_ROUND_UP(offset_in_page(addr) + len,
PAGE_SIZE);
} else {
nents++;
}
}
nents += rqst[i].rq_npages;
}
nents += DIV_ROUND_UP(offset_in_page(sig) + SMB2_SIGNATURE_SIZE, PAGE_SIZE);
return nents;
}

/* We can not use the normal sg_set_buf() as we will sometimes pass a
 * stack object as buf.
 */
static inline struct scatterlist *cifs_sg_set_buf(struct scatterlist *sg,
const void *buf,
unsigned int buflen)
{
unsigned long addr = (unsigned long)buf;
unsigned int off = offset_in_page(addr);

addr &= PAGE_MASK;
if (unlikely(is_vmalloc_addr((void *)addr))) {
do {
unsigned int len = min_t(unsigned int, buflen, PAGE_SIZE - off);

sg_set_page(sg++, vmalloc_to_page((void *)addr), len, off);

off = 0;
addr += PAGE_SIZE;
buflen -= len;
} while (buflen);
} else {
sg_set_page(sg++, virt_to_page(addr), buflen, off);
}
return sg;
}

#endif /* _CIFS_GLOB_H */

@@ -590,8 +590,8 @@ int cifs_alloc_hash(const char *name, struct crypto_shash **shash,
struct sdesc **sdesc);
void cifs_free_hash(struct crypto_shash **shash, struct sdesc **sdesc);

extern void rqst_page_get_length(struct smb_rqst *rqst, unsigned int page,
unsigned int *len, unsigned int *offset);
void rqst_page_get_length(const struct smb_rqst *rqst, unsigned int page,
unsigned int *len, unsigned int *offset);
struct cifs_chan *
cifs_ses_find_chan(struct cifs_ses *ses, struct TCP_Server_Info *server);
int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses);

@@ -1948,7 +1948,7 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx __attribute__((unused)),
struct cifs_ses *
cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
{
int rc = -ENOMEM;
int rc = 0;
unsigned int xid;
struct cifs_ses *ses;
struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
@@ -1990,6 +1990,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
return ses;
}

rc = -ENOMEM;

cifs_dbg(FYI, "Existing smb sess not found\n");
ses = sesInfoAlloc();
if (ses == NULL)

@@ -1134,8 +1134,8 @@ cifs_free_hash(struct crypto_shash **shash, struct sdesc **sdesc)
 * @len: Where to store the length for this page:
 * @offset: Where to store the offset for this page
 */
void rqst_page_get_length(struct smb_rqst *rqst, unsigned int page,
unsigned int *len, unsigned int *offset)
void rqst_page_get_length(const struct smb_rqst *rqst, unsigned int page,
unsigned int *len, unsigned int *offset)
{
*len = rqst->rq_pagesz;
*offset = (page == 0) ? rqst->rq_offset : 0;

@@ -4416,69 +4416,82 @@ fill_transform_hdr(struct smb2_transform_hdr *tr_hdr, unsigned int orig_len,
memcpy(&tr_hdr->SessionId, &shdr->SessionId, 8);
}

/* We can not use the normal sg_set_buf() as we will sometimes pass a
 * stack object as buf.
 */
static inline void smb2_sg_set_buf(struct scatterlist *sg, const void *buf,
unsigned int buflen)
static void *smb2_aead_req_alloc(struct crypto_aead *tfm, const struct smb_rqst *rqst,
int num_rqst, const u8 *sig, u8 **iv,
struct aead_request **req, struct scatterlist **sgl,
unsigned int *num_sgs)
{
void *addr;
/*
 * VMAP_STACK (at least) puts stack into the vmalloc address space
 */
if (is_vmalloc_addr(buf))
addr = vmalloc_to_page(buf);
else
addr = virt_to_page(buf);
sg_set_page(sg, addr, buflen, offset_in_page(buf));
}
unsigned int req_size = sizeof(**req) + crypto_aead_reqsize(tfm);
unsigned int iv_size = crypto_aead_ivsize(tfm);
unsigned int len;
u8 *p;

/* Assumes the first rqst has a transform header as the first iov.
 * I.e.
 * rqst[0].rq_iov[0] is transform header
 * rqst[0].rq_iov[1+] data to be encrypted/decrypted
 * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
 */
static struct scatterlist *
init_sg(int num_rqst, struct smb_rqst *rqst, u8 *sign)
{
unsigned int sg_len;
struct scatterlist *sg;
unsigned int i;
unsigned int j;
unsigned int idx = 0;
int skip;
*num_sgs = cifs_get_num_sgs(rqst, num_rqst, sig);

sg_len = 1;
for (i = 0; i < num_rqst; i++)
sg_len += rqst[i].rq_nvec + rqst[i].rq_npages;
len = iv_size;
len += crypto_aead_alignmask(tfm) & ~(crypto_tfm_ctx_alignment() - 1);
len = ALIGN(len, crypto_tfm_ctx_alignment());
len += req_size;
len = ALIGN(len, __alignof__(struct scatterlist));
len += *num_sgs * sizeof(**sgl);

sg = kmalloc_array(sg_len, sizeof(struct scatterlist), GFP_KERNEL);
if (!sg)
p = kmalloc(len, GFP_ATOMIC);
if (!p)
return NULL;

sg_init_table(sg, sg_len);
*iv = (u8 *)PTR_ALIGN(p, crypto_aead_alignmask(tfm) + 1);
*req = (struct aead_request *)PTR_ALIGN(*iv + iv_size,
crypto_tfm_ctx_alignment());
*sgl = (struct scatterlist *)PTR_ALIGN((u8 *)*req + req_size,
__alignof__(struct scatterlist));
return p;
}

static void *smb2_get_aead_req(struct crypto_aead *tfm, const struct smb_rqst *rqst,
int num_rqst, const u8 *sig, u8 **iv,
struct aead_request **req, struct scatterlist **sgl)
{
unsigned int off, len, skip;
struct scatterlist *sg;
unsigned int num_sgs;
unsigned long addr;
int i, j;
void *p;

p = smb2_aead_req_alloc(tfm, rqst, num_rqst, sig, iv, req, sgl, &num_sgs);
if (!p)
return NULL;

sg_init_table(*sgl, num_sgs);
sg = *sgl;

/* Assumes the first rqst has a transform header as the first iov.
 * I.e.
 * rqst[0].rq_iov[0] is transform header
 * rqst[0].rq_iov[1+] data to be encrypted/decrypted
 * rqst[1+].rq_iov[0+] data to be encrypted/decrypted
 */
for (i = 0; i < num_rqst; i++) {
/*
 * The first rqst has a transform header where the
 * first 20 bytes are not part of the encrypted blob.
 */
for (j = 0; j < rqst[i].rq_nvec; j++) {
/*
 * The first rqst has a transform header where the
 * first 20 bytes are not part of the encrypted blob
 */
struct kvec *iov = &rqst[i].rq_iov[j];

skip = (i == 0) && (j == 0) ? 20 : 0;
smb2_sg_set_buf(&sg[idx++],
rqst[i].rq_iov[j].iov_base + skip,
rqst[i].rq_iov[j].iov_len - skip);
}

addr = (unsigned long)iov->iov_base + skip;
len = iov->iov_len - skip;
sg = cifs_sg_set_buf(sg, (void *)addr, len);
}
for (j = 0; j < rqst[i].rq_npages; j++) {
unsigned int len, offset;

rqst_page_get_length(&rqst[i], j, &len, &offset);
sg_set_page(&sg[idx++], rqst[i].rq_pages[j], len, offset);
rqst_page_get_length(&rqst[i], j, &len, &off);
sg_set_page(sg++, rqst[i].rq_pages[j], len, off);
}
}
smb2_sg_set_buf(&sg[idx], sign, SMB2_SIGNATURE_SIZE);
return sg;
cifs_sg_set_buf(sg, sig, SMB2_SIGNATURE_SIZE);

return p;
}

static int
@@ -4522,11 +4535,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
u8 sign[SMB2_SIGNATURE_SIZE] = {};
u8 key[SMB3_ENC_DEC_KEY_SIZE];
struct aead_request *req;
char *iv;
unsigned int iv_len;
u8 *iv;
DECLARE_CRYPTO_WAIT(wait);
struct crypto_aead *tfm;
unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
void *creq;

rc = smb2_get_enc_key(server, tr_hdr->SessionId, enc, key);
if (rc) {
@@ -4561,32 +4574,15 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
return rc;
}

req = aead_request_alloc(tfm, GFP_KERNEL);
if (!req) {
cifs_server_dbg(VFS, "%s: Failed to alloc aead request\n", __func__);
creq = smb2_get_aead_req(tfm, rqst, num_rqst, sign, &iv, &req, &sg);
if (unlikely(!creq))
return -ENOMEM;
}

if (!enc) {
memcpy(sign, &tr_hdr->Signature, SMB2_SIGNATURE_SIZE);
crypt_len += SMB2_SIGNATURE_SIZE;
}

sg = init_sg(num_rqst, rqst, sign);
if (!sg) {
cifs_server_dbg(VFS, "%s: Failed to init sg\n", __func__);
rc = -ENOMEM;
goto free_req;
}

iv_len = crypto_aead_ivsize(tfm);
iv = kzalloc(iv_len, GFP_KERNEL);
if (!iv) {
cifs_server_dbg(VFS, "%s: Failed to alloc iv\n", __func__);
rc = -ENOMEM;
goto free_sg;
}

if ((server->cipher_type == SMB2_ENCRYPTION_AES128_GCM) ||
(server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))
memcpy(iv, (char *)tr_hdr->Nonce, SMB3_AES_GCM_NONCE);
@@ -4595,6 +4591,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
memcpy(iv + 1, (char *)tr_hdr->Nonce, SMB3_AES_CCM_NONCE);
}

aead_request_set_tfm(req, tfm);
aead_request_set_crypt(req, sg, sg, crypt_len, iv);
aead_request_set_ad(req, assoc_data_len);

@@ -4607,11 +4604,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
if (!rc && enc)
memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);

kfree(iv);
free_sg:
kfree(sg);
free_req:
kfree(req);
kfree_sensitive(creq);
return rc;
}

@@ -1520,7 +1520,11 @@ static void process_recv_sockets(struct work_struct *work)

static void process_listen_recv_socket(struct work_struct *work)
{
accept_from_sock(&listen_con);
int ret;

do {
ret = accept_from_sock(&listen_con);
} while (!ret);
}

static void dlm_connect(struct connection *con)
@@ -1797,7 +1801,7 @@ static int dlm_listen_for_all(void)
result = sock->ops->listen(sock, 5);
if (result < 0) {
dlm_close_sock(&listen_con.sock);
goto out;
return result;
}

return 0;
@@ -2000,7 +2004,6 @@ fail_listen:
dlm_proto_ops = NULL;
fail_proto_ops:
dlm_allow_conn = 0;
dlm_close_sock(&listen_con.sock);
work_stop();
fail_local:
deinit_local();

@@ -665,7 +665,7 @@ int ext4_should_retry_alloc(struct super_block *sb, int *retries)
 * it's possible we've just missed a transaction commit here,
 * so ignore the returned status
 */
jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
ext4_debug("%s: retrying operation after ENOSPC\n", sb->s_id);
(void) jbd2_journal_force_commit_nested(sbi->s_journal);
return 1;
}

@@ -559,7 +559,7 @@ enum {
 *
 * It's not paranoia if the Murphy's Law really *is* out to get you. :-)
 */
#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1 << EXT4_INODE_##FLAG))
#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1U << EXT4_INODE_##FLAG))
#define CHECK_FLAG_VALUE(FLAG) BUILD_BUG_ON(!TEST_FLAG_VALUE(FLAG))

static inline void ext4_check_flag_values(void)
@@ -2996,7 +2996,8 @@ int do_journal_get_write_access(handle_t *handle, struct inode *inode,
typedef enum {
EXT4_IGET_NORMAL = 0,
EXT4_IGET_SPECIAL = 0x0001, /* OK to iget a system inode */
EXT4_IGET_HANDLE = 0x0002 /* Inode # is from a handle */
EXT4_IGET_HANDLE = 0x0002, /* Inode # is from a handle */
EXT4_IGET_BAD = 0x0004 /* Allow to iget a bad inode */
} ext4_iget_flags;

extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
@@ -3646,8 +3647,8 @@ extern void ext4_initialize_dirent_tail(struct buffer_head *bh,
unsigned int blocksize);
extern int ext4_handle_dirty_dirblock(handle_t *handle, struct inode *inode,
struct buffer_head *bh);
extern int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name,
struct inode *inode);
extern int __ext4_unlink(struct inode *dir, const struct qstr *d_name,
struct inode *inode, struct dentry *dentry);
extern int __ext4_link(struct inode *dir, struct inode *inode,
struct dentry *dentry);

@@ -267,8 +267,7 @@ int __ext4_forget(const char *where, unsigned int line, handle_t *handle,
trace_ext4_forget(inode, is_metadata, blocknr);
BUFFER_TRACE(bh, "enter");

jbd_debug(4, "forgetting bh %p: is_metadata = %d, mode %o, "
"data mode %x\n",
ext4_debug("forgetting bh %p: is_metadata=%d, mode %o, data mode %x\n",
bh, is_metadata, inode->i_mode,
test_opt(inode->i_sb, DATA_FLAGS));

@@ -5810,6 +5810,14 @@ int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
struct ext4_extent *extent;
ext4_lblk_t first_lblk, first_lclu, last_lclu;

/*
 * if data can be stored inline, the logical cluster isn't
 * mapped - no physical clusters have been allocated, and the
 * file has no extents
 */
if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
return 0;

/* search for the extent closest to the first block in the cluster */
path = ext4_find_extent(inode, EXT4_C2B(sbi, lclu), NULL, 0);
if (IS_ERR(path)) {

@@ -1372,7 +1372,7 @@ retry:
if (count_reserved)
count_rsvd(inode, lblk, orig_es.es_len - len1 - len2,
&orig_es, &rc);
goto out;
goto out_get_reserved;
}

if (len1 > 0) {
@@ -1414,6 +1414,7 @@ retry:
}
}

out_get_reserved:
if (count_reserved)
*reserved = get_rsvd(inode, end, es, &rc);
out:

@@ -399,25 +399,34 @@ static int __track_dentry_update(struct inode *inode, void *arg, bool update)
struct __track_dentry_update_args *dentry_update =
(struct __track_dentry_update_args *)arg;
struct dentry *dentry = dentry_update->dentry;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
struct inode *dir = dentry->d_parent->d_inode;
struct super_block *sb = inode->i_sb;
struct ext4_sb_info *sbi = EXT4_SB(sb);

mutex_unlock(&ei->i_fc_lock);

if (IS_ENCRYPTED(dir)) {
ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_ENCRYPTED_FILENAME,
NULL);
mutex_lock(&ei->i_fc_lock);
return -EOPNOTSUPP;
}

node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
if (!node) {
ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM, NULL);
ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
mutex_lock(&ei->i_fc_lock);
return -ENOMEM;
}

node->fcd_op = dentry_update->op;
node->fcd_parent = dentry->d_parent->d_inode->i_ino;
node->fcd_parent = dir->i_ino;
node->fcd_ino = inode->i_ino;
if (dentry->d_name.len > DNAME_INLINE_LEN) {
node->fcd_name.name = kmalloc(dentry->d_name.len, GFP_NOFS);
if (!node->fcd_name.name) {
kmem_cache_free(ext4_fc_dentry_cachep, node);
ext4_fc_mark_ineligible(inode->i_sb,
EXT4_FC_REASON_NOMEM, NULL);
ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_NOMEM, NULL);
mutex_lock(&ei->i_fc_lock);
return -ENOMEM;
}
@@ -595,6 +604,15 @@ static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail)

/* Ext4 commit path routines */

/* memcpy to fc reserved space and update CRC */
static void *ext4_fc_memcpy(struct super_block *sb, void *dst, const void *src,
int len, u32 *crc)
{
if (crc)
*crc = ext4_chksum(EXT4_SB(sb), *crc, src, len);
return memcpy(dst, src, len);
}

/* memzero and update CRC */
static void *ext4_fc_memzero(struct super_block *sb, void *dst, int len,
u32 *crc)
@@ -620,62 +638,59 @@ static void *ext4_fc_memzero(struct super_block *sb, void *dst, int len,
 */
static u8 *ext4_fc_reserve_space(struct super_block *sb, int len, u32 *crc)
{
struct ext4_fc_tl *tl;
struct ext4_fc_tl tl;
struct ext4_sb_info *sbi = EXT4_SB(sb);
struct buffer_head *bh;
int bsize = sbi->s_journal->j_blocksize;
int ret, off = sbi->s_fc_bytes % bsize;
int pad_len;
int remaining;
u8 *dst;

/*
 * After allocating len, we should have space at least for a 0 byte
 * padding.
 * If 'len' is too long to fit in any block alongside a PAD tlv, then we
 * cannot fulfill the request.
 */
if (len + sizeof(struct ext4_fc_tl) > bsize)
if (len > bsize - EXT4_FC_TAG_BASE_LEN)
return NULL;

if (bsize - off - 1 > len + sizeof(struct ext4_fc_tl)) {
/*
 * Only allocate from current buffer if we have enough space for
 * this request AND we have space to add a zero byte padding.
 */
if (!sbi->s_fc_bh) {
ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
if (ret)
return NULL;
sbi->s_fc_bh = bh;
}
sbi->s_fc_bytes += len;
return sbi->s_fc_bh->b_data + off;
if (!sbi->s_fc_bh) {
ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
if (ret)
return NULL;
sbi->s_fc_bh = bh;
}
/* Need to add PAD tag */
tl = (struct ext4_fc_tl *)(sbi->s_fc_bh->b_data + off);
tl->fc_tag = cpu_to_le16(EXT4_FC_TAG_PAD);
pad_len = bsize - off - 1 - sizeof(struct ext4_fc_tl);
tl->fc_len = cpu_to_le16(pad_len);
if (crc)
*crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl));
if (pad_len > 0)
ext4_fc_memzero(sb, tl + 1, pad_len, crc);
dst = sbi->s_fc_bh->b_data + off;

/*
 * Allocate the bytes in the current block if we can do so while still
 * leaving enough space for a PAD tlv.
 */
remaining = bsize - EXT4_FC_TAG_BASE_LEN - off;
if (len <= remaining) {
sbi->s_fc_bytes += len;
return dst;
}

/*
 * Else, terminate the current block with a PAD tlv, then allocate a new
 * block and allocate the bytes at the start of that new block.
 */

tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_PAD);
tl.fc_len = cpu_to_le16(remaining);
ext4_fc_memcpy(sb, dst, &tl, EXT4_FC_TAG_BASE_LEN, crc);
ext4_fc_memzero(sb, dst + EXT4_FC_TAG_BASE_LEN, remaining, crc);

ext4_fc_submit_bh(sb, false);

ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
if (ret)
return NULL;
sbi->s_fc_bh = bh;
sbi->s_fc_bytes = (sbi->s_fc_bytes / bsize + 1) * bsize + len;
sbi->s_fc_bytes += bsize - off + len;
return sbi->s_fc_bh->b_data;
}

/* memcpy to fc reserved space and update CRC */
static void *ext4_fc_memcpy(struct super_block *sb, void *dst, const void *src,
int len, u32 *crc)
{
if (crc)
*crc = ext4_chksum(EXT4_SB(sb), *crc, src, len);
return memcpy(dst, src, len);
}

/*
 * Complete a fast commit by writing tail tag.
 *
@@ -696,23 +711,25 @@ static int ext4_fc_write_tail(struct super_block *sb, u32 crc)
 * ext4_fc_reserve_space takes care of allocating an extra block if
 * there's no enough space on this block for accommodating this tail.
 */
dst = ext4_fc_reserve_space(sb, sizeof(tl) + sizeof(tail), &crc);
dst = ext4_fc_reserve_space(sb, EXT4_FC_TAG_BASE_LEN + sizeof(tail), &crc);
if (!dst)
return -ENOSPC;

off = sbi->s_fc_bytes % bsize;

tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_TAIL);
tl.fc_len = cpu_to_le16(bsize - off - 1 + sizeof(struct ext4_fc_tail));
tl.fc_len = cpu_to_le16(bsize - off + sizeof(struct ext4_fc_tail));
sbi->s_fc_bytes = round_up(sbi->s_fc_bytes, bsize);

ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), &crc);
dst += sizeof(tl);
ext4_fc_memcpy(sb, dst, &tl, EXT4_FC_TAG_BASE_LEN, &crc);
dst += EXT4_FC_TAG_BASE_LEN;
tail.fc_tid = cpu_to_le32(sbi->s_journal->j_running_transaction->t_tid);
ext4_fc_memcpy(sb, dst, &tail.fc_tid, sizeof(tail.fc_tid), &crc);
dst += sizeof(tail.fc_tid);
tail.fc_crc = cpu_to_le32(crc);
ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL);
dst += sizeof(tail.fc_crc);
memset(dst, 0, bsize - off); /* Don't leak uninitialized memory. */

ext4_fc_submit_bh(sb, true);

@@ -729,15 +746,15 @@ static bool ext4_fc_add_tlv(struct super_block *sb, u16 tag, u16 len, u8 *val,
struct ext4_fc_tl tl;
u8 *dst;

dst = ext4_fc_reserve_space(sb, sizeof(tl) + len, crc);
dst = ext4_fc_reserve_space(sb, EXT4_FC_TAG_BASE_LEN + len, crc);
if (!dst)
return false;

tl.fc_tag = cpu_to_le16(tag);
tl.fc_len = cpu_to_le16(len);

ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), crc);
ext4_fc_memcpy(sb, dst + sizeof(tl), val, len, crc);
ext4_fc_memcpy(sb, dst, &tl, EXT4_FC_TAG_BASE_LEN, crc);
ext4_fc_memcpy(sb, dst + EXT4_FC_TAG_BASE_LEN, val, len, crc);

return true;
}
@@ -749,8 +766,8 @@ static bool ext4_fc_add_dentry_tlv(struct super_block *sb, u32 *crc,
struct ext4_fc_dentry_info fcd;
struct ext4_fc_tl tl;
int dlen = fc_dentry->fcd_name.len;
u8 *dst = ext4_fc_reserve_space(sb, sizeof(tl) + sizeof(fcd) + dlen,
crc);
u8 *dst = ext4_fc_reserve_space(sb,
EXT4_FC_TAG_BASE_LEN + sizeof(fcd) + dlen, crc);

if (!dst)
return false;
@@ -759,8 +776,8 @@ static bool ext4_fc_add_dentry_tlv(struct super_block *sb, u32 *crc,
fcd.fc_ino = cpu_to_le32(fc_dentry->fcd_ino);
tl.fc_tag = cpu_to_le16(fc_dentry->fcd_op);
tl.fc_len = cpu_to_le16(sizeof(fcd) + dlen);
ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), crc);
dst += sizeof(tl);
ext4_fc_memcpy(sb, dst, &tl, EXT4_FC_TAG_BASE_LEN, crc);
dst += EXT4_FC_TAG_BASE_LEN;
ext4_fc_memcpy(sb, dst, &fcd, sizeof(fcd), crc);
dst += sizeof(fcd);
ext4_fc_memcpy(sb, dst, fc_dentry->fcd_name.name, dlen, crc);
@@ -796,13 +813,13 @@ static int ext4_fc_write_inode(struct inode *inode, u32 *crc)

ret = -ECANCELED;
dst = ext4_fc_reserve_space(inode->i_sb,
sizeof(tl) + inode_len + sizeof(fc_inode.fc_ino), crc);
EXT4_FC_TAG_BASE_LEN + inode_len + sizeof(fc_inode.fc_ino), crc);
if (!dst)
goto err;

if (!ext4_fc_memcpy(inode->i_sb, dst, &tl, sizeof(tl), crc))
if (!ext4_fc_memcpy(inode->i_sb, dst, &tl, EXT4_FC_TAG_BASE_LEN, crc))
goto err;
dst += sizeof(tl);
dst += EXT4_FC_TAG_BASE_LEN;
if (!ext4_fc_memcpy(inode->i_sb, dst, &fc_inode, sizeof(fc_inode), crc))
goto err;
dst += sizeof(fc_inode);
@@ -840,8 +857,8 @@ static int ext4_fc_write_inode_data(struct inode *inode, u32 *crc)
mutex_unlock(&ei->i_fc_lock);

cur_lblk_off = old_blk_size;
jbd_debug(1, "%s: will try writing %d to %d for inode %ld\n",
__func__, cur_lblk_off, new_blk_size, inode->i_ino);
ext4_debug("will try writing %d to %d for inode %ld\n",
cur_lblk_off, new_blk_size, inode->i_ino);

while (cur_lblk_off <= new_blk_size) {
map.m_lblk = cur_lblk_off;
@@ -1096,7 +1113,7 @@ static void ext4_fc_update_stats(struct super_block *sb, int status,
{
struct ext4_fc_stats *stats = &EXT4_SB(sb)->s_fc_stats;

jbd_debug(1, "Fast commit ended with status = %d", status);
ext4_debug("Fast commit ended with status = %d", status);
if (status == EXT4_FC_STATUS_OK) {
stats->fc_num_commits++;
stats->fc_numblks += nblks;
@@ -1266,7 +1283,7 @@ struct dentry_info_args {
};

static inline void tl_to_darg(struct dentry_info_args *darg,
struct ext4_fc_tl *tl, u8 *val)
struct ext4_fc_tl *tl, u8 *val)
{
struct ext4_fc_dentry_info fcd;

@@ -1275,8 +1292,14 @@ static inline void tl_to_darg(struct dentry_info_args *darg,
darg->parent_ino = le32_to_cpu(fcd.fc_parent_ino);
darg->ino = le32_to_cpu(fcd.fc_ino);
darg->dname = val + offsetof(struct ext4_fc_dentry_info, fc_dname);
darg->dname_len = le16_to_cpu(tl->fc_len) -
sizeof(struct ext4_fc_dentry_info);
darg->dname_len = tl->fc_len - sizeof(struct ext4_fc_dentry_info);
}

static inline void ext4_fc_get_tl(struct ext4_fc_tl *tl, u8 *val)
{
memcpy(tl, val, EXT4_FC_TAG_BASE_LEN);
tl->fc_len = le16_to_cpu(tl->fc_len);
tl->fc_tag = le16_to_cpu(tl->fc_tag);
}

/* Unlink replay function */
@@ -1298,19 +1321,19 @@ static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);

if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found", darg.ino);
ext4_debug("Inode %d not found", darg.ino);
return 0;
}

old_parent = ext4_iget(sb, darg.parent_ino,
EXT4_IGET_NORMAL);
if (IS_ERR(old_parent)) {
jbd_debug(1, "Dir with inode %d not found", darg.parent_ino);
ext4_debug("Dir with inode %d not found", darg.parent_ino);
iput(inode);
return 0;
}

ret = __ext4_unlink(NULL, old_parent, &entry, inode);
ret = __ext4_unlink(old_parent, &entry, inode, NULL);
/* -ENOENT ok coz it might not exist anymore. */
if (ret == -ENOENT)
ret = 0;
@@ -1330,21 +1353,21 @@ static int ext4_fc_replay_link_internal(struct super_block *sb,

dir = ext4_iget(sb, darg->parent_ino, EXT4_IGET_NORMAL);
if (IS_ERR(dir)) {
jbd_debug(1, "Dir with inode %d not found.", darg->parent_ino);
ext4_debug("Dir with inode %d not found.", darg->parent_ino);
dir = NULL;
goto out;
}

dentry_dir = d_obtain_alias(dir);
if (IS_ERR(dentry_dir)) {
jbd_debug(1, "Failed to obtain dentry");
ext4_debug("Failed to obtain dentry");
dentry_dir = NULL;
goto out;
}

dentry_inode = d_alloc(dentry_dir, &qstr_dname);
if (!dentry_inode) {
jbd_debug(1, "Inode dentry not created.");
ext4_debug("Inode dentry not created.");
ret = -ENOMEM;
goto out;
}
@@ -1357,7 +1380,7 @@ static int ext4_fc_replay_link_internal(struct super_block *sb,
 * could complete.
 */
if (ret && ret != -EEXIST) {
jbd_debug(1, "Failed to link\n");
ext4_debug("Failed to link\n");
goto out;
}

@@ -1391,7 +1414,7 @@ static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,

inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
ext4_debug("Inode not found.");
return 0;
}

@@ -1441,7 +1464,7 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
struct ext4_inode *raw_fc_inode;
struct inode *inode = NULL;
struct ext4_iloc iloc;
int inode_len, ino, ret, tag = le16_to_cpu(tl->fc_tag);
int inode_len, ino, ret, tag = tl->fc_tag;
struct ext4_extent_header *eh;

memcpy(&fc_inode, val, sizeof(fc_inode));
@@ -1466,7 +1489,7 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
if (ret)
goto out;

inode_len = le16_to_cpu(tl->fc_len) - sizeof(struct ext4_fc_inode);
inode_len = tl->fc_len - sizeof(struct ext4_fc_inode);
raw_inode = ext4_raw_inode(&iloc);

memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));
@@ -1501,7 +1524,7 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
/* Given that we just wrote the inode on disk, this SHOULD succeed. */
inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
ext4_debug("Inode not found.");
return -EFSCORRUPTED;
}

@@ -1554,7 +1577,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,

inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "inode %d not found.", darg.ino);
ext4_debug("inode %d not found.", darg.ino);
inode = NULL;
ret = -EINVAL;
goto out;
@@ -1567,7 +1590,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,
 */
dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL);
if (IS_ERR(dir)) {
jbd_debug(1, "Dir %d not found.", darg.ino);
ext4_debug("Dir %d not found.", darg.ino);
goto out;
}
ret = ext4_init_new_dir(NULL, dir, inode);
@@ -1655,7 +1678,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,

inode = ext4_iget(sb, le32_to_cpu(fc_add_ex.fc_ino), EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "Inode not found.");
ext4_debug("Inode not found.");
return 0;
}

@@ -1669,7 +1692,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,

cur = start;
remaining = len;
jbd_debug(1, "ADD_RANGE, lblk %d, pblk %lld, len %d, unwritten %d, inode %ld\n",
ext4_debug("ADD_RANGE, lblk %d, pblk %lld, len %d, unwritten %d, inode %ld\n",
start, start_pblk, len, ext4_ext_is_unwritten(ex),
inode->i_ino);

@@ -1730,7 +1753,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
}

/* Range is mapped and needs a state change */
jbd_debug(1, "Converting from %ld to %d %lld",
ext4_debug("Converting from %ld to %d %lld",
map.m_flags & EXT4_MAP_UNWRITTEN,
ext4_ext_is_unwritten(ex), map.m_pblk);
ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
@@ -1773,7 +1796,7 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,

inode = ext4_iget(sb, le32_to_cpu(lrange.fc_ino), EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange.fc_ino));
ext4_debug("Inode %d not found", le32_to_cpu(lrange.fc_ino));
return 0;
}

@@ -1781,7 +1804,7 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
if (ret)
goto out;

jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
ext4_debug("DEL_RANGE, inode %ld, lblk %d, len %d\n",
inode->i_ino, le32_to_cpu(lrange.fc_lblk),
le32_to_cpu(lrange.fc_len));
while (remaining > 0) {
@@ -1830,7 +1853,7 @@ static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb)
inode = ext4_iget(sb, state->fc_modified_inodes[i],
EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
jbd_debug(1, "Inode %d not found.",
ext4_debug("Inode %d not found.",
state->fc_modified_inodes[i]);
continue;
||||
}
|
||||
@@ -1896,6 +1919,33 @@ void ext4_fc_replay_cleanup(struct super_block *sb)
|
||||
kfree(sbi->s_fc_replay_state.fc_modified_inodes);
|
||||
}
|
||||
|
||||
static bool ext4_fc_value_len_isvalid(struct ext4_sb_info *sbi,
|
||||
int tag, int len)
|
||||
{
|
||||
switch (tag) {
|
||||
case EXT4_FC_TAG_ADD_RANGE:
|
||||
return len == sizeof(struct ext4_fc_add_range);
|
||||
case EXT4_FC_TAG_DEL_RANGE:
|
||||
return len == sizeof(struct ext4_fc_del_range);
|
||||
case EXT4_FC_TAG_CREAT:
|
||||
case EXT4_FC_TAG_LINK:
|
||||
case EXT4_FC_TAG_UNLINK:
|
||||
len -= sizeof(struct ext4_fc_dentry_info);
|
||||
return len >= 1 && len <= EXT4_NAME_LEN;
|
||||
case EXT4_FC_TAG_INODE:
|
||||
len -= sizeof(struct ext4_fc_inode);
|
||||
return len >= EXT4_GOOD_OLD_INODE_SIZE &&
|
||||
len <= sbi->s_inode_size;
|
||||
case EXT4_FC_TAG_PAD:
|
||||
return true; /* padding can have any length */
|
||||
case EXT4_FC_TAG_TAIL:
|
||||
return len >= sizeof(struct ext4_fc_tail);
|
||||
case EXT4_FC_TAG_HEAD:
|
||||
return len == sizeof(struct ext4_fc_head);
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Recovery Scan phase handler
|
||||
*
|
||||
@@ -1931,7 +1981,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
|
||||
state = &sbi->s_fc_replay_state;
|
||||
|
||||
start = (u8 *)bh->b_data;
|
||||
end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
|
||||
end = start + journal->j_blocksize;
|
||||
|
||||
if (state->fc_replay_expected_off == 0) {
|
||||
state->fc_cur_tag = 0;
|
||||
@@ -1952,12 +2002,19 @@ static int ext4_fc_replay_scan(journal_t *journal,
|
||||
}
|
||||
|
||||
state->fc_replay_expected_off++;
|
||||
for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
|
||||
memcpy(&tl, cur, sizeof(tl));
|
||||
val = cur + sizeof(tl);
|
||||
jbd_debug(3, "Scan phase, tag:%s, blk %lld\n",
|
||||
tag2str(le16_to_cpu(tl.fc_tag)), bh->b_blocknr);
|
||||
switch (le16_to_cpu(tl.fc_tag)) {
|
||||
for (cur = start; cur <= end - EXT4_FC_TAG_BASE_LEN;
|
||||
cur = cur + EXT4_FC_TAG_BASE_LEN + tl.fc_len) {
|
||||
ext4_fc_get_tl(&tl, cur);
|
||||
val = cur + EXT4_FC_TAG_BASE_LEN;
|
||||
if (tl.fc_len > end - val ||
|
||||
!ext4_fc_value_len_isvalid(sbi, tl.fc_tag, tl.fc_len)) {
|
||||
ret = state->fc_replay_num_tags ?
|
||||
JBD2_FC_REPLAY_STOP : -ECANCELED;
|
||||
goto out_err;
|
||||
}
|
||||
ext4_debug("Scan phase, tag:%s, blk %lld\n",
|
||||
tag2str(tl.fc_tag), bh->b_blocknr);
|
||||
switch (tl.fc_tag) {
|
||||
case EXT4_FC_TAG_ADD_RANGE:
|
||||
memcpy(&ext, val, sizeof(ext));
|
||||
ex = (struct ext4_extent *)&ext.fc_ex;
|
||||
@@ -1977,13 +2034,13 @@ static int ext4_fc_replay_scan(journal_t *journal,
|
||||
case EXT4_FC_TAG_PAD:
|
||||
state->fc_cur_tag++;
|
||||
state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
|
||||
sizeof(tl) + le16_to_cpu(tl.fc_len));
|
||||
EXT4_FC_TAG_BASE_LEN + tl.fc_len);
|
||||
break;
|
||||
case EXT4_FC_TAG_TAIL:
|
||||
state->fc_cur_tag++;
|
||||
memcpy(&tail, val, sizeof(tail));
|
||||
state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
|
||||
sizeof(tl) +
|
||||
EXT4_FC_TAG_BASE_LEN +
|
||||
offsetof(struct ext4_fc_tail,
|
||||
fc_crc));
|
||||
if (le32_to_cpu(tail.fc_tid) == expected_tid &&
|
||||
@@ -2010,7 +2067,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
|
||||
}
|
||||
state->fc_cur_tag++;
|
||||
state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
|
||||
sizeof(tl) + le16_to_cpu(tl.fc_len));
|
||||
EXT4_FC_TAG_BASE_LEN + tl.fc_len);
|
||||
break;
|
||||
default:
|
||||
ret = state->fc_replay_num_tags ?
|
||||
@@ -2050,7 +2107,7 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
|
||||
sbi->s_mount_state |= EXT4_FC_REPLAY;
|
||||
}
|
||||
if (!sbi->s_fc_replay_state.fc_replay_num_tags) {
|
||||
jbd_debug(1, "Replay stops\n");
|
||||
ext4_debug("Replay stops\n");
|
||||
ext4_fc_set_bitmaps_and_counters(sb);
|
||||
return 0;
|
||||
}
|
||||
@@ -2063,21 +2120,22 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
|
||||
#endif
|
||||
|
||||
start = (u8 *)bh->b_data;
|
||||
end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
|
||||
end = start + journal->j_blocksize;
|
||||
|
||||
for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
|
||||
memcpy(&tl, cur, sizeof(tl));
|
||||
val = cur + sizeof(tl);
|
||||
for (cur = start; cur <= end - EXT4_FC_TAG_BASE_LEN;
|
||||
cur = cur + EXT4_FC_TAG_BASE_LEN + tl.fc_len) {
|
||||
ext4_fc_get_tl(&tl, cur);
|
||||
val = cur + EXT4_FC_TAG_BASE_LEN;
|
||||
|
||||
if (state->fc_replay_num_tags == 0) {
|
||||
ret = JBD2_FC_REPLAY_STOP;
|
||||
ext4_fc_set_bitmaps_and_counters(sb);
|
||||
break;
|
||||
}
|
||||
jbd_debug(3, "Replay phase, tag:%s\n",
|
||||
tag2str(le16_to_cpu(tl.fc_tag)));
|
||||
|
||||
ext4_debug("Replay phase, tag:%s\n", tag2str(tl.fc_tag));
|
||||
state->fc_replay_num_tags--;
|
||||
switch (le16_to_cpu(tl.fc_tag)) {
|
||||
switch (tl.fc_tag) {
|
||||
case EXT4_FC_TAG_LINK:
|
||||
ret = ext4_fc_replay_link(sb, &tl, val);
|
||||
break;
|
||||
@@ -2098,19 +2156,18 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
|
||||
break;
|
||||
case EXT4_FC_TAG_PAD:
|
||||
trace_ext4_fc_replay(sb, EXT4_FC_TAG_PAD, 0,
|
||||
le16_to_cpu(tl.fc_len), 0);
|
||||
tl.fc_len, 0);
|
||||
break;
|
||||
case EXT4_FC_TAG_TAIL:
|
||||
trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL, 0,
|
||||
le16_to_cpu(tl.fc_len), 0);
|
||||
trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL,
|
||||
0, tl.fc_len, 0);
|
||||
memcpy(&tail, val, sizeof(tail));
|
||||
WARN_ON(le32_to_cpu(tail.fc_tid) != expected_tid);
|
||||
break;
|
||||
case EXT4_FC_TAG_HEAD:
|
||||
break;
|
||||
default:
|
||||
trace_ext4_fc_replay(sb, le16_to_cpu(tl.fc_tag), 0,
|
||||
le16_to_cpu(tl.fc_len), 0);
|
||||
trace_ext4_fc_replay(sb, tl.fc_tag, 0, tl.fc_len, 0);
|
||||
ret = -ECANCELED;
|
||||
break;
|
||||
}
|
||||
@@ -2134,17 +2191,17 @@ void ext4_fc_init(struct super_block *sb, journal_t *journal)
|
||||
journal->j_fc_cleanup_callback = ext4_fc_cleanup;
|
||||
}
|
||||
|
||||
static const char *fc_ineligible_reasons[] = {
|
||||
"Extended attributes changed",
|
||||
"Cross rename",
|
||||
"Journal flag changed",
|
||||
"Insufficient memory",
|
||||
"Swap boot",
|
||||
"Resize",
|
||||
"Dir renamed",
|
||||
"Falloc range op",
|
||||
"Data journalling",
|
||||
"FC Commit Failed"
|
||||
static const char * const fc_ineligible_reasons[] = {
|
||||
[EXT4_FC_REASON_XATTR] = "Extended attributes changed",
|
||||
[EXT4_FC_REASON_CROSS_RENAME] = "Cross rename",
|
||||
[EXT4_FC_REASON_JOURNAL_FLAG_CHANGE] = "Journal flag changed",
|
||||
[EXT4_FC_REASON_NOMEM] = "Insufficient memory",
|
||||
[EXT4_FC_REASON_SWAP_BOOT] = "Swap boot",
|
||||
[EXT4_FC_REASON_RESIZE] = "Resize",
|
||||
[EXT4_FC_REASON_RENAME_DIR] = "Dir renamed",
|
||||
[EXT4_FC_REASON_FALLOC_RANGE] = "Falloc range op",
|
||||
[EXT4_FC_REASON_INODE_JOURNAL_DATA] = "Data journalling",
|
||||
[EXT4_FC_REASON_ENCRYPTED_FILENAME] = "Encrypted filename",
|
||||
};
|
||||
|
||||
int ext4_fc_info_show(struct seq_file *seq, void *v)
|
||||
|
||||
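The scan-phase change above hardens a TLV (tag-length-value) walk: before trusting `tl.fc_len`, the loop now checks that the payload fits inside the block and that the length is plausible for the tag. The following is a minimal userspace sketch of that pattern, not kernel code; the struct layout, tag numbers, and the fixed head size are all made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk TLV header, analogous in shape to struct ext4_fc_tl. */
struct fc_tl {
	uint16_t tag;
	uint16_t len;
};

#define FC_TAG_BASE_LEN (sizeof(struct fc_tl))
#define FC_TAG_PAD  4	/* made-up tag numbers */
#define FC_TAG_HEAD 8

/* Per-tag payload-length rule, mirroring ext4_fc_value_len_isvalid(). */
static bool value_len_isvalid(uint16_t tag, uint16_t len)
{
	switch (tag) {
	case FC_TAG_PAD:
		return true;		/* padding can have any length */
	case FC_TAG_HEAD:
		return len == 8;	/* fixed-size head (invented size) */
	}
	return false;			/* unknown tag: reject */
}

/*
 * Walk a TLV buffer and return the number of well-formed records, or -1
 * on the first record whose payload would run past the buffer end or
 * whose length is invalid for its tag.
 */
static int scan_tlv(const uint8_t *buf, size_t size)
{
	const uint8_t *cur = buf, *end = buf + size;
	int n = 0;

	while (cur <= end - FC_TAG_BASE_LEN) {
		struct fc_tl tl;
		const uint8_t *val;

		memcpy(&tl, cur, sizeof(tl));	/* unaligned-safe read */
		val = cur + FC_TAG_BASE_LEN;

		if (tl.len > end - val || !value_len_isvalid(tl.tag, tl.len))
			return -1;		/* corrupt record */
		n++;
		cur = val + tl.len;
	}
	return n;
}
```

The key design point matches the patch: the length check happens before the payload is consumed, so a corrupted length can never advance `cur` past `end`.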
@@ -58,7 +58,7 @@ struct ext4_fc_dentry_info {
 	__u8 fc_dname[0];
 };
 
-/* Value structure for EXT4_FC_TAG_INODE and EXT4_FC_TAG_INODE_PARTIAL. */
+/* Value structure for EXT4_FC_TAG_INODE. */
 struct ext4_fc_inode {
 	__le32 fc_ino;
 	__u8 fc_raw_inode[0];
@@ -70,6 +70,9 @@ struct ext4_fc_tail {
 	__le32 fc_crc;
 };
 
+/* Tag base length */
+#define EXT4_FC_TAG_BASE_LEN (sizeof(struct ext4_fc_tl))
+
 /*
  * Fast commit status codes
  */
@@ -93,7 +96,7 @@ enum {
 	EXT4_FC_REASON_RENAME_DIR,
 	EXT4_FC_REASON_FALLOC_RANGE,
 	EXT4_FC_REASON_INODE_JOURNAL_DATA,
-	EXT4_FC_COMMIT_FAILED,
+	EXT4_FC_REASON_ENCRYPTED_FILENAME,
 	EXT4_FC_REASON_MAX
 };
@@ -148,6 +148,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
 	struct super_block *sb = inode->i_sb;
 	Indirect *p = chain;
 	struct buffer_head *bh;
+	unsigned int key;
 	int ret = -EIO;
 
 	*err = 0;
@@ -156,7 +157,13 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
 	if (!p->key)
 		goto no_block;
 	while (--depth) {
-		bh = sb_getblk(sb, le32_to_cpu(p->key));
+		key = le32_to_cpu(p->key);
+		if (key > ext4_blocks_count(EXT4_SB(sb)->s_es)) {
+			/* the block was out of range */
+			ret = -EFSCORRUPTED;
+			goto failure;
+		}
+		bh = sb_getblk(sb, key);
 		if (unlikely(!bh)) {
 			ret = -ENOMEM;
 			goto failure;
@@ -460,7 +467,7 @@ static int ext4_splice_branch(handle_t *handle,
 	 * the new i_size. But that is not done here - it is done in
 	 * generic_commit_write->__mark_inode_dirty->ext4_dirty_inode.
	 */
-	jbd_debug(5, "splicing indirect only\n");
+	ext4_debug("splicing indirect only\n");
 	BUFFER_TRACE(where->bh, "call ext4_handle_dirty_metadata");
 	err = ext4_handle_dirty_metadata(handle, ar->inode, where->bh);
 	if (err)
@@ -472,7 +479,7 @@ static int ext4_splice_branch(handle_t *handle,
 		err = ext4_mark_inode_dirty(handle, ar->inode);
 		if (unlikely(err))
 			goto err_out;
-		jbd_debug(5, "splicing direct\n");
+		ext4_debug("splicing direct\n");
 	}
 	return err;
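The `ext4_get_branch()` hunk above rejects an indirect-block pointer that lies beyond the last filesystem block instead of handing it to `sb_getblk()`. A minimal sketch of that validation, with a made-up block count standing in for `ext4_blocks_count(es)`:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for ext4_blocks_count(es); the value is invented for this sketch. */
#define FS_BLOCKS_COUNT 1024u

/* Linux uses EUCLEAN (117) to report on-disk corruption as EFSCORRUPTED. */
#define EFSCORRUPTED 117

/*
 * Validate an on-disk block pointer before dereferencing it: a pointer
 * past the end of the filesystem is corruption, not an I/O error.
 */
static int check_branch_key(uint32_t key)
{
	if (key > FS_BLOCKS_COUNT)
		return -EFSCORRUPTED;	/* block pointer out of range */
	return 0;
}
```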
@@ -225,13 +225,13 @@ void ext4_evict_inode(struct inode *inode)
 
	/*
	 * For inodes with journalled data, transaction commit could have
-	 * dirtied the inode. Flush worker is ignoring it because of I_FREEING
-	 * flag but we still need to remove the inode from the writeback lists.
+	 * dirtied the inode. And for inodes with dioread_nolock, unwritten
+	 * extents converting worker could merge extents and also have dirtied
+	 * the inode. Flush worker is ignoring it because of I_FREEING flag but
+	 * we still need to remove the inode from the writeback lists.
	 */
-	if (!list_empty_careful(&inode->i_io_list)) {
-		WARN_ON_ONCE(!ext4_should_journal_data(inode));
+	if (!list_empty_careful(&inode->i_io_list))
 		inode_io_list_del(inode);
-	}
 
	/*
	 * Protect us against freezing - iput() caller didn't have to have any
@@ -338,6 +338,12 @@ stop_handle:
 	ext4_xattr_inode_array_free(ea_inode_array);
 	return;
 no_delete:
+	/*
+	 * Check out some where else accidentally dirty the evicting inode,
+	 * which may probably cause inode use-after-free issues later.
+	 */
+	WARN_ON_ONCE(!list_empty_careful(&inode->i_io_list));
+
 	if (!list_empty(&EXT4_I(inode)->i_fc_list))
 		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM, NULL);
 	ext4_clear_inode(inode);	/* We must guarantee clearing of inode... */
@@ -1298,7 +1304,8 @@ static int ext4_write_end(struct file *file,
 
 	trace_ext4_write_end(inode, pos, len, copied);
 
-	if (ext4_has_inline_data(inode))
+	if (ext4_has_inline_data(inode) &&
+	    ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
 		return ext4_write_inline_data_end(inode, pos, len, copied, page);
 
 	copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
@@ -4205,7 +4212,8 @@ int ext4_truncate(struct inode *inode)
 
 	/* If we zero-out tail of the page, we have to create jinode for jbd2 */
 	if (inode->i_size & (inode->i_sb->s_blocksize - 1)) {
-		if (ext4_inode_attach_jinode(inode) < 0)
+		err = ext4_inode_attach_jinode(inode);
+		if (err)
 			goto out_trace;
 	}
 
@@ -4306,9 +4314,17 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
 	inodes_per_block = EXT4_SB(sb)->s_inodes_per_block;
 	inode_offset = ((ino - 1) %
			EXT4_INODES_PER_GROUP(sb));
-	block = ext4_inode_table(sb, gdp) + (inode_offset / inodes_per_block);
 	iloc->offset = (inode_offset % inodes_per_block) * EXT4_INODE_SIZE(sb);
 
+	block = ext4_inode_table(sb, gdp);
+	if ((block <= le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) ||
+	    (block >= ext4_blocks_count(EXT4_SB(sb)->s_es))) {
+		ext4_error(sb, "Invalid inode table block %llu in "
+			   "block_group %u", block, iloc->block_group);
+		return -EFSCORRUPTED;
+	}
+	block += (inode_offset / inodes_per_block);
+
 	bh = sb_getblk(sb, block);
 	if (unlikely(!bh))
 		return -ENOMEM;
@@ -4883,8 +4899,14 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 	if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb))
 		ext4_error_inode(inode, function, line, 0,
				 "casefold flag without casefold feature");
-	brelse(iloc.bh);
+	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
+		ext4_error_inode(inode, function, line, 0,
+				 "bad inode without EXT4_IGET_BAD flag");
+		ret = -EUCLEAN;
+		goto bad_inode;
+	}
 
+	brelse(iloc.bh);
 	unlock_new_inode(inode);
 	return inode;
 
@@ -5205,7 +5227,7 @@ int ext4_write_inode(struct inode *inode, struct writeback_control *wbc)
 
 	if (EXT4_SB(inode->i_sb)->s_journal) {
 		if (ext4_journal_current_handle()) {
-			jbd_debug(1, "called recursively, non-PF_MEMALLOC!\n");
+			ext4_debug("called recursively, non-PF_MEMALLOC!\n");
			dump_stack();
			return -EIO;
		}
@@ -5798,6 +5820,14 @@ static int __ext4_expand_extra_isize(struct inode *inode,
		return 0;
	}
 
+	/*
+	 * We may need to allocate external xattr block so we need quotas
+	 * initialized. Here we can be called with various locks held so we
+	 * cannot affort to initialize quotas ourselves. So just bail.
+	 */
+	if (dquot_initialize_needed(inode))
+		return -EAGAIN;
+
	/* try to expand with EAs present */
	error = ext4_expand_extra_isize_ea(inode, new_extra_isize,
					   raw_inode, handle);
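The `__ext4_get_inode_loc()` hunk above splits the inode-table arithmetic so the table block can be range-checked before the per-inode offset is added. The arithmetic itself is simple index math; here is a userspace sketch of it, with toy geometry constants that are invented for the example (not real ext4 defaults):

```c
#include <assert.h>
#include <stdint.h>

/* Toy filesystem geometry; the numbers are made up for this sketch. */
#define INODES_PER_GROUP 8192u
#define INODES_PER_BLOCK 16u
#define INODE_SIZE       256u

/*
 * Map a 1-based inode number to (block offset within the group's inode
 * table, byte offset within that block), mirroring the index math in
 * __ext4_get_inode_loc().
 */
static void inode_loc(uint32_t ino, uint32_t *table_block_off,
		      uint32_t *byte_off)
{
	uint32_t idx = (ino - 1) % INODES_PER_GROUP; /* index inside group */

	*table_block_off = idx / INODES_PER_BLOCK;
	*byte_off = (idx % INODES_PER_BLOCK) * INODE_SIZE;
}
```

In the kernel, the patch's point is ordering: validate the inode-table base block first, then add `inode_offset / inodes_per_block`, so a corrupted group descriptor cannot send the read outside the filesystem.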
@@ -124,7 +124,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
 	blkcnt_t blocks;
 	unsigned short bytes;
 
-	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
+	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO,
+			EXT4_IGET_SPECIAL | EXT4_IGET_BAD);
 	if (IS_ERR(inode_bl))
 		return PTR_ERR(inode_bl);
 	ei_bl = EXT4_I(inode_bl);
@@ -174,7 +175,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
 	/* Protect extent tree against block allocations via delalloc */
 	ext4_double_down_write_data_sem(inode, inode_bl);
 
-	if (inode_bl->i_nlink == 0) {
+	if (is_bad_inode(inode_bl) || !S_ISREG(inode_bl->i_mode)) {
 		/* this inode has never been used as a BOOT_LOADER */
 		set_nlink(inode_bl, 1);
 		i_uid_write(inode_bl, 0);
@@ -491,6 +492,10 @@ static int ext4_ioctl_setproject(struct inode *inode, __u32 projid)
 	if (ext4_is_quota_file(inode))
 		return err;
 
+	err = dquot_initialize(inode);
+	if (err)
+		return err;
+
 	err = ext4_get_inode_loc(inode, &iloc);
 	if (err)
 		return err;
@@ -506,10 +511,6 @@ static int ext4_ioctl_setproject(struct inode *inode, __u32 projid)
 		brelse(iloc.bh);
 	}
 
-	err = dquot_initialize(inode);
-	if (err)
-		return err;
-
 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
		EXT4_QUOTA_INIT_BLOCKS(sb) +
		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
@@ -3204,14 +3204,20 @@ end_rmdir:
 	return retval;
 }
 
-int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name,
-		  struct inode *inode)
+int __ext4_unlink(struct inode *dir, const struct qstr *d_name,
+		  struct inode *inode,
+		  struct dentry *dentry /* NULL during fast_commit recovery */)
 {
 	int retval = -ENOENT;
 	struct buffer_head *bh;
 	struct ext4_dir_entry_2 *de;
+	handle_t *handle;
 	int skip_remove_dentry = 0;
 
+	/*
+	 * Keep this outside the transaction; it may have to set up the
+	 * directory's encryption key, which isn't GFP_NOFS-safe.
+	 */
 	bh = ext4_find_entry(dir, d_name, &de, NULL);
 	if (IS_ERR(bh))
 		return PTR_ERR(bh);
@@ -3228,7 +3234,14 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
 		if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
 			skip_remove_dentry = 1;
 		else
-			goto out;
+			goto out_bh;
+	}
+
+	handle = ext4_journal_start(dir, EXT4_HT_DIR,
+				    EXT4_DATA_TRANS_BLOCKS(dir->i_sb));
+	if (IS_ERR(handle)) {
+		retval = PTR_ERR(handle);
+		goto out_bh;
 	}
 
 	if (IS_DIRSYNC(dir))
@@ -3237,12 +3250,12 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
 	if (!skip_remove_dentry) {
 		retval = ext4_delete_entry(handle, dir, de, bh);
 		if (retval)
-			goto out;
+			goto out_handle;
 		dir->i_ctime = dir->i_mtime = current_time(dir);
 		ext4_update_dx_flag(dir);
 		retval = ext4_mark_inode_dirty(handle, dir);
 		if (retval)
-			goto out;
+			goto out_handle;
 	} else {
 		retval = 0;
 	}
@@ -3255,15 +3268,17 @@ int __ext4_unlink(handle_t *handle, struct inode *dir, const struct qstr *d_name
 		ext4_orphan_add(handle, inode);
 	inode->i_ctime = current_time(inode);
 	retval = ext4_mark_inode_dirty(handle, inode);
-
-out:
+	if (dentry && !retval)
+		ext4_fc_track_unlink(handle, dentry);
+out_handle:
+	ext4_journal_stop(handle);
+out_bh:
 	brelse(bh);
 	return retval;
 }
 
 static int ext4_unlink(struct inode *dir, struct dentry *dentry)
 {
-	handle_t *handle;
 	int retval;
 
 	if (unlikely(ext4_forced_shutdown(EXT4_SB(dir->i_sb))))
@@ -3281,16 +3296,7 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
 	if (retval)
 		goto out_trace;
 
-	handle = ext4_journal_start(dir, EXT4_HT_DIR,
-				    EXT4_DATA_TRANS_BLOCKS(dir->i_sb));
-	if (IS_ERR(handle)) {
-		retval = PTR_ERR(handle);
-		goto out_trace;
-	}
-
-	retval = __ext4_unlink(handle, dir, &dentry->d_name, d_inode(dentry));
-	if (!retval)
-		ext4_fc_track_unlink(handle, dentry);
+	retval = __ext4_unlink(dir, &dentry->d_name, d_inode(dentry), dentry);
 #ifdef CONFIG_UNICODE
 	/* VFS negative dentries are incompatible with Encoding and
	 * Case-insensitiveness. Eventually we'll want avoid
@@ -3301,8 +3307,6 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
 	if (IS_CASEFOLDED(dir))
 		d_invalidate(dentry);
 #endif
-	if (handle)
-		ext4_journal_stop(handle);
 
 out_trace:
 	trace_ext4_unlink_exit(dentry, retval);
@@ -3806,6 +3810,9 @@ static int ext4_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
 		return -EXDEV;
 
 	retval = dquot_initialize(old.dir);
+	if (retval)
+		return retval;
+	retval = dquot_initialize(old.inode);
 	if (retval)
 		return retval;
 	retval = dquot_initialize(new.dir);
@@ -181,8 +181,8 @@ int ext4_orphan_add(handle_t *handle, struct inode *inode)
 	} else
 		brelse(iloc.bh);
 
-	jbd_debug(4, "superblock will point to %lu\n", inode->i_ino);
-	jbd_debug(4, "orphan inode %lu will point to %d\n",
+	ext4_debug("superblock will point to %lu\n", inode->i_ino);
+	ext4_debug("orphan inode %lu will point to %d\n",
			inode->i_ino, NEXT_ORPHAN(inode));
 out:
 	ext4_std_error(sb, err);
@@ -251,7 +251,7 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode)
 	}
 
 	mutex_lock(&sbi->s_orphan_lock);
-	jbd_debug(4, "remove inode %lu from orphan list\n", inode->i_ino);
+	ext4_debug("remove inode %lu from orphan list\n", inode->i_ino);
 
 	prev = ei->i_orphan.prev;
 	list_del_init(&ei->i_orphan);
@@ -267,7 +267,7 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode)
 
 	ino_next = NEXT_ORPHAN(inode);
 	if (prev == &sbi->s_orphan) {
-		jbd_debug(4, "superblock will point to %u\n", ino_next);
+		ext4_debug("superblock will point to %u\n", ino_next);
 		BUFFER_TRACE(sbi->s_sbh, "get_write_access");
 		err = ext4_journal_get_write_access(handle, inode->i_sb,
						    sbi->s_sbh, EXT4_JTR_NONE);
@@ -286,7 +286,7 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode)
 		struct inode *i_prev =
			&list_entry(prev, struct ext4_inode_info, i_orphan)->vfs_inode;
 
-		jbd_debug(4, "orphan inode %lu will point to %u\n",
+		ext4_debug("orphan inode %lu will point to %u\n",
			  i_prev->i_ino, ino_next);
 		err = ext4_reserve_inode_write(handle, i_prev, &iloc2);
 		if (err) {
@@ -332,8 +332,8 @@ static void ext4_process_orphan(struct inode *inode,
			ext4_msg(sb, KERN_DEBUG,
				"%s: truncating inode %lu to %lld bytes",
				__func__, inode->i_ino, inode->i_size);
-		jbd_debug(2, "truncating inode %lu to %lld bytes\n",
-			  inode->i_ino, inode->i_size);
+		ext4_debug("truncating inode %lu to %lld bytes\n",
+			   inode->i_ino, inode->i_size);
 		inode_lock(inode);
 		truncate_inode_pages(inode->i_mapping, inode->i_size);
 		ret = ext4_truncate(inode);
@@ -353,8 +353,8 @@ static void ext4_process_orphan(struct inode *inode,
			ext4_msg(sb, KERN_DEBUG,
				"%s: deleting unreferenced inode %lu",
				__func__, inode->i_ino);
-		jbd_debug(2, "deleting unreferenced inode %lu\n",
-			  inode->i_ino);
+		ext4_debug("deleting unreferenced inode %lu\n",
+			   inode->i_ino);
 		(*nr_orphans)++;
 	}
 	iput(inode); /* The delete magic happens here! */
@@ -391,7 +391,7 @@ void ext4_orphan_cleanup(struct super_block *sb, struct ext4_super_block *es)
 	int inodes_per_ob = ext4_inodes_per_orphan_block(sb);
 
 	if (!es->s_last_orphan && !oi->of_blocks) {
-		jbd_debug(4, "no orphan inodes to clean up\n");
+		ext4_debug("no orphan inodes to clean up\n");
 		return;
 	}
 
@@ -412,10 +412,10 @@ void ext4_orphan_cleanup(struct super_block *sb, struct ext4_super_block *es)
 		/* don't clear list on RO mount w/ errors */
 		if (es->s_last_orphan && !(s_flags & SB_RDONLY)) {
			ext4_msg(sb, KERN_INFO, "Errors on filesystem, "
-				  "clearing orphan list.\n");
+				  "clearing orphan list.");
			es->s_last_orphan = 0;
		}
-		jbd_debug(1, "Skipping orphan recovery on fs with errors.\n");
+		ext4_debug("Skipping orphan recovery on fs with errors.\n");
		return;
	}
 
@@ -459,7 +459,7 @@ void ext4_orphan_cleanup(struct super_block *sb, struct ext4_super_block *es)
		 * so, skip the rest.
		 */
		if (EXT4_SB(sb)->s_mount_state & EXT4_ERROR_FS) {
-			jbd_debug(1, "Skipping orphan recovery on fs with errors.\n");
+			ext4_debug("Skipping orphan recovery on fs with errors.\n");
			es->s_last_orphan = 0;
			break;
		}
@@ -1557,8 +1557,8 @@ exit_journal:
 		int meta_bg = ext4_has_feature_meta_bg(sb);
 		sector_t old_gdb = 0;
 
-		update_backups(sb, sbi->s_sbh->b_blocknr, (char *)es,
-			       sizeof(struct ext4_super_block), 0);
+		update_backups(sb, ext4_group_first_block_no(sb, 0),
+			       (char *)es, sizeof(struct ext4_super_block), 0);
 		for (; gdb_num <= gdb_num_end; gdb_num++) {
 			struct buffer_head *gdb_bh;
 
@@ -1769,7 +1769,7 @@ errout:
 		if (test_opt(sb, DEBUG))
			printk(KERN_DEBUG "EXT4-fs: extended group to %llu "
			       "blocks\n", ext4_blocks_count(es));
-		update_backups(sb, EXT4_SB(sb)->s_sbh->b_blocknr,
+		update_backups(sb, ext4_group_first_block_no(sb, 0),
			       (char *)es, sizeof(struct ext4_super_block), 0);
 	}
 	return err;
@@ -1288,6 +1288,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
 		return NULL;
 
 	inode_set_iversion(&ei->vfs_inode, 1);
+	ei->i_flags = 0;
 	spin_lock_init(&ei->i_raw_lock);
 	INIT_LIST_HEAD(&ei->i_prealloc_list);
 	atomic_set(&ei->i_prealloc_active, 0);
@@ -4662,30 +4663,31 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
		   ext4_has_feature_journal_needs_recovery(sb)) {
		ext4_msg(sb, KERN_ERR, "required journal recovery "
		       "suppressed and not mounted read-only");
-		goto failed_mount_wq;
+		goto failed_mount3a;
	} else {
		/* Nojournal mode, all journal mount options are illegal */
-		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
-			ext4_msg(sb, KERN_ERR, "can't mount with "
-				 "journal_checksum, fs mounted w/o journal");
-			goto failed_mount_wq;
-		}
		if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
			ext4_msg(sb, KERN_ERR, "can't mount with "
				 "journal_async_commit, fs mounted w/o journal");
-			goto failed_mount_wq;
+			goto failed_mount3a;
		}
+
+		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
+			ext4_msg(sb, KERN_ERR, "can't mount with "
+				 "journal_checksum, fs mounted w/o journal");
+			goto failed_mount3a;
+		}
		if (sbi->s_commit_interval != JBD2_DEFAULT_MAX_COMMIT_AGE*HZ) {
			ext4_msg(sb, KERN_ERR, "can't mount with "
				 "commit=%lu, fs mounted w/o journal",
				 sbi->s_commit_interval / HZ);
-			goto failed_mount_wq;
+			goto failed_mount3a;
		}
		if (EXT4_MOUNT_DATA_FLAGS &
		    (sbi->s_mount_opt ^ sbi->s_def_mount_opt)) {
			ext4_msg(sb, KERN_ERR, "can't mount with "
				 "data=, fs mounted w/o journal");
-			goto failed_mount_wq;
+			goto failed_mount3a;
		}
		sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
		clear_opt(sb, JOURNAL_CHECKSUM);
@@ -5152,9 +5154,9 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
		return NULL;
	}
 
-	jbd_debug(2, "Journal inode found at %p: %lld bytes\n",
+	ext4_debug("Journal inode found at %p: %lld bytes\n",
		  journal_inode, journal_inode->i_size);
-	if (!S_ISREG(journal_inode->i_mode)) {
+	if (!S_ISREG(journal_inode->i_mode) || IS_ENCRYPTED(journal_inode)) {
		ext4_msg(sb, KERN_ERR, "invalid journal inode");
		iput(journal_inode);
		return NULL;
@@ -6308,6 +6310,20 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
 	return err;
 }
 
+static inline bool ext4_check_quota_inum(int type, unsigned long qf_inum)
+{
+	switch (type) {
+	case USRQUOTA:
+		return qf_inum == EXT4_USR_QUOTA_INO;
+	case GRPQUOTA:
+		return qf_inum == EXT4_GRP_QUOTA_INO;
+	case PRJQUOTA:
+		return qf_inum >= EXT4_GOOD_OLD_FIRST_INO;
+	default:
+		BUG();
+	}
+}
+
 static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
			     unsigned int flags)
 {
@@ -6324,9 +6340,16 @@ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
 	if (!qf_inums[type])
 		return -EPERM;
 
+	if (!ext4_check_quota_inum(type, qf_inums[type])) {
+		ext4_error(sb, "Bad quota inum: %lu, type: %d",
+			   qf_inums[type], type);
+		return -EUCLEAN;
+	}
+
 	qf_inode = ext4_iget(sb, qf_inums[type], EXT4_IGET_SPECIAL);
 	if (IS_ERR(qf_inode)) {
-		ext4_error(sb, "Bad quota inode # %lu", qf_inums[type]);
+		ext4_error(sb, "Bad quota inode: %lu, type: %d",
+			   qf_inums[type], type);
 		return PTR_ERR(qf_inode);
 	}
 
@@ -6365,8 +6388,9 @@ int ext4_enable_quotas(struct super_block *sb)
 		if (err) {
			ext4_warning(sb,
				"Failed to enable quota tracking "
-				"(type=%d, err=%d). Please run "
-				"e2fsck to fix.", type, err);
+				"(type=%d, err=%d, ino=%lu). "
+				"Please run e2fsck to fix.", type,
+				err, qf_inums[type]);
			for (type--; type >= 0; type--) {
				struct inode *inode;
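The new `ext4_check_quota_inum()` above pins user and group quota files to their fixed reserved inode numbers while allowing any non-reserved inode for project quota. A self-contained userspace mirror of that check (reserved inode numbers as in ext4's on-disk layout; the enum values are simplified stand-ins for the kernel's quota type constants):

```c
#include <assert.h>
#include <stdbool.h>

/* Reserved inode numbers, as in ext4's on-disk layout. */
#define USR_QUOTA_INO      3
#define GRP_QUOTA_INO      4
#define GOOD_OLD_FIRST_INO 11

enum { USRQUOTA, GRPQUOTA, PRJQUOTA };

/*
 * Mirrors the shape of ext4_check_quota_inum(): user/group quota inodes
 * must sit at their fixed reserved numbers; a project quota inode may be
 * any ordinary (non-reserved) inode.
 */
static bool check_quota_inum(int type, unsigned long qf_inum)
{
	switch (type) {
	case USRQUOTA:
		return qf_inum == USR_QUOTA_INO;
	case GRPQUOTA:
		return qf_inum == GRP_QUOTA_INO;
	case PRJQUOTA:
		return qf_inum >= GOOD_OLD_FIRST_INO;
	}
	return false;	/* unknown type (the kernel BUG()s here) */
}
```

This is the validation that lets `ext4_quota_enable()` reject a corrupted superblock field before handing the inode number to `ext4_iget()`.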
@@ -76,7 +76,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
 		size_t n = min_t(size_t, count,
				 PAGE_SIZE - offset_in_page(pos));
 		struct page *page;
-		void *fsdata;
+		void *fsdata = NULL;
 		int res;
 
 		res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0,
@@ -1281,7 +1281,7 @@ retry_ref:
			ce = mb_cache_entry_get(ea_block_cache, hash,
						bh->b_blocknr);
			if (ce) {
-				ce->e_reusable = 1;
+				set_bit(MBE_REUSABLE_B, &ce->e_flags);
				mb_cache_entry_put(ea_block_cache, ce);
			}
		}
@@ -1441,6 +1441,9 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
 	if (!err)
 		err = ext4_inode_attach_jinode(ea_inode);
 	if (err) {
+		if (ext4_xattr_inode_dec_ref(handle, ea_inode))
+			ext4_warning_inode(ea_inode,
+					   "cleanup dec ref error %d", err);
 		iput(ea_inode);
 		return ERR_PTR(err);
 	}
@@ -2042,7 +2045,7 @@ inserted:
				}
				BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
				if (ref == EXT4_XATTR_REFCOUNT_MAX)
-					ce->e_reusable = 0;
+					clear_bit(MBE_REUSABLE_B, &ce->e_flags);
				ea_bdebug(new_bh, "reusing; refcount now=%d",
					  ref);
				ext4_xattr_block_csum_set(inode, new_bh);
@@ -2070,19 +2073,11 @@ inserted:
 
			goal = ext4_group_first_block_no(sb,
						EXT4_I(inode)->i_block_group);
-
-			/* non-extent files can't have physical blocks past 2^32 */
-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
-				goal = goal & EXT4_MAX_BLOCK_FILE_PHYS;
-
			block = ext4_new_meta_blocks(handle, inode, goal, 0,
						     NULL, &error);
			if (error)
				goto cleanup;
 
-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
-				BUG_ON(block > EXT4_MAX_BLOCK_FILE_PHYS);
-
			ea_idebug(inode, "creating block %llu",
				  (unsigned long long)block);
 
@@ -2554,7 +2549,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
 
 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
 	bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
-	buffer = kmalloc(value_size, GFP_NOFS);
+	buffer = kvmalloc(value_size, GFP_NOFS);
 	b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
 	if (!is || !bs || !buffer || !b_entry_name) {
		error = -ENOMEM;
@@ -2606,7 +2601,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
 	error = 0;
 out:
 	kfree(b_entry_name);
-	kfree(buffer);
+	kvfree(buffer);
 	if (is)
 		brelse(is->iloc.bh);
 	if (bs)
@@ -456,16 +456,16 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
 		/* panic? */
 		return -EIO;

+	res = -EIO;
 	if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
-		return -EIO;
+		goto out;
 	fd.search_key->cat = HFS_I(main_inode)->cat_key;
 	if (hfs_brec_find(&fd))
-		/* panic? */
 		goto out;

 	if (S_ISDIR(main_inode->i_mode)) {
 		if (fd.entrylength < sizeof(struct hfs_cat_dir))
-			/* panic? */;
+			goto out;
 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
 			       sizeof(struct hfs_cat_dir));
 		if (rec.type != HFS_CDR_DIR ||

@@ -478,6 +478,8 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
 				sizeof(struct hfs_cat_dir));
 	} else if (HFS_IS_RSRC(inode)) {
+		if (fd.entrylength < sizeof(struct hfs_cat_file))
+			goto out;
 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
 			       sizeof(struct hfs_cat_file));
 		hfs_inode_write_fork(inode, rec.file.RExtRec,

@@ -486,7 +488,7 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
 			     sizeof(struct hfs_cat_file));
 	} else {
 		if (fd.entrylength < sizeof(struct hfs_cat_file))
-			/* panic? */;
+			goto out;
 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
 			       sizeof(struct hfs_cat_file));
 		if (rec.type != HFS_CDR_FIL ||

@@ -503,9 +505,10 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
 				sizeof(struct hfs_cat_file));
 	}
+	res = 0;
 out:
 	hfs_find_exit(&fd);
-	return 0;
+	return res;
 }

 static struct dentry *hfs_file_lookup(struct inode *dir, struct dentry *dentry,

@@ -198,6 +198,8 @@ struct hfsplus_sb_info {
 #define HFSPLUS_SB_HFSX		3
 #define HFSPLUS_SB_CASEFOLD	4
 #define HFSPLUS_SB_NOBARRIER	5
+#define HFSPLUS_SB_UID		6
+#define HFSPLUS_SB_GID		7

 static inline struct hfsplus_sb_info *HFSPLUS_SB(struct super_block *sb)
 {

@@ -190,11 +190,11 @@ static void hfsplus_get_perms(struct inode *inode,
 	mode = be16_to_cpu(perms->mode);

 	i_uid_write(inode, be32_to_cpu(perms->owner));
-	if (!i_uid_read(inode) && !mode)
+	if ((test_bit(HFSPLUS_SB_UID, &sbi->flags)) || (!i_uid_read(inode) && !mode))
 		inode->i_uid = sbi->uid;

 	i_gid_write(inode, be32_to_cpu(perms->group));
-	if (!i_gid_read(inode) && !mode)
+	if ((test_bit(HFSPLUS_SB_GID, &sbi->flags)) || (!i_gid_read(inode) && !mode))
 		inode->i_gid = sbi->gid;

 	if (dir) {

@@ -509,8 +509,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
 	if (type == HFSPLUS_FOLDER) {
 		struct hfsplus_cat_folder *folder = &entry.folder;

-		if (fd->entrylength < sizeof(struct hfsplus_cat_folder))
-			/* panic? */;
+		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_folder));
 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
 					sizeof(struct hfsplus_cat_folder));
 		hfsplus_get_perms(inode, &folder->permissions, 1);

@@ -530,8 +529,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
 	} else if (type == HFSPLUS_FILE) {
 		struct hfsplus_cat_file *file = &entry.file;

-		if (fd->entrylength < sizeof(struct hfsplus_cat_file))
-			/* panic? */;
+		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_file));
 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
 					sizeof(struct hfsplus_cat_file));

@@ -588,8 +586,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
 	if (S_ISDIR(main_inode->i_mode)) {
 		struct hfsplus_cat_folder *folder = &entry.folder;

-		if (fd.entrylength < sizeof(struct hfsplus_cat_folder))
-			/* panic? */;
+		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_folder));
 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
 					sizeof(struct hfsplus_cat_folder));
 		/* simple node checks? */

@@ -614,8 +611,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
 	} else {
 		struct hfsplus_cat_file *file = &entry.file;

-		if (fd.entrylength < sizeof(struct hfsplus_cat_file))
-			/* panic? */;
+		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_file));
 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
 					sizeof(struct hfsplus_cat_file));
 		hfsplus_inode_write_fork(inode, &file->data_fork);

@@ -140,6 +140,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
 			if (!uid_valid(sbi->uid)) {
 				pr_err("invalid uid specified\n");
 				return 0;
+			} else {
+				set_bit(HFSPLUS_SB_UID, &sbi->flags);
 			}
 			break;
 		case opt_gid:

@@ -151,6 +153,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
 			if (!gid_valid(sbi->gid)) {
 				pr_err("invalid gid specified\n");
 				return 0;
+			} else {
+				set_bit(HFSPLUS_SB_GID, &sbi->flags);
 			}
 			break;
 		case opt_part:

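The hfsplus hunks above change when a mounter-supplied uid/gid wins over the on-disk owner. A minimal userspace sketch of that decision, with plain unsigned ints standing in for `kuid_t` and the new `HFSPLUS_SB_UID` flag bit (names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Sketch: with the fix, an explicit uid= mount option always wins;
 * before it, the mount value was used only when the on-disk owner and
 * mode were both zero. */
static unsigned int effective_owner(int uid_opt_set, unsigned int disk_uid,
				    unsigned int mode, unsigned int mount_uid)
{
	if (uid_opt_set || (disk_uid == 0 && mode == 0))
		return mount_uid;	/* mounter's uid overrides */
	return disk_uid;		/* keep the on-disk owner */
}
```

The same shape applies to the gid path with `HFSPLUS_SB_GID`.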
@@ -319,7 +319,8 @@ int ksmbd_decode_ntlmssp_auth_blob(struct authenticate_message *authblob,
 	dn_off = le32_to_cpu(authblob->DomainName.BufferOffset);
 	dn_len = le16_to_cpu(authblob->DomainName.Length);

-	if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len)
+	if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len ||
+	    nt_len < CIFS_ENCPWD_SIZE)
 		return -EINVAL;

 	/* TODO : use domain name that imported from configuration file */

@@ -310,9 +310,12 @@ int ksmbd_conn_handler_loop(void *p)

 		/* 4 for rfc1002 length field */
 		size = pdu_size + 4;
-		conn->request_buf = kvmalloc(size, GFP_KERNEL);
+		conn->request_buf = kvmalloc(size,
+					     GFP_KERNEL |
+					     __GFP_NOWARN |
+					     __GFP_NORETRY);
 		if (!conn->request_buf)
-			continue;
+			break;

 		memcpy(conn->request_buf, hdr_buf, sizeof(hdr_buf));
 		if (!ksmbd_smb_request(conn))

@@ -295,6 +295,7 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
 	struct msghdr ksmbd_msg;
 	struct kvec *iov;
 	struct ksmbd_conn *conn = KSMBD_TRANS(t)->conn;
+	int max_retry = 2;

 	iov = get_conn_iovec(t, nr_segs);
 	if (!iov)

@@ -321,9 +322,11 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
 		} else if (conn->status == KSMBD_SESS_NEED_RECONNECT) {
 			total_read = -EAGAIN;
 			break;
-		} else if (length == -ERESTARTSYS || length == -EAGAIN) {
+		} else if ((length == -ERESTARTSYS || length == -EAGAIN) &&
+			   max_retry) {
 			usleep_range(1000, 2000);
 			length = 0;
+			max_retry--;
 			continue;
 		} else if (length <= 0) {
 			total_read = -EAGAIN;

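The ksmbd_tcp_readv hunk above bounds previously unlimited retries on transient read errors. A userspace sketch of that bounded-retry shape, with a simulated read standing in for kernel_recvmsg() and the sleep omitted (the kernel uses usleep_range between attempts); ERESTARTSYS is kernel-internal, so it is defined locally here:

```c
#include <errno.h>

#define ERESTARTSYS 512	/* kernel-internal value; not in userspace errno.h */

/* Fails transiently the first *attempts times, then returns data. */
static int simulated_read(int *attempts)
{
	return (*attempts)-- > 0 ? -EAGAIN : 42;
}

static int read_with_retry(int transient_failures)
{
	int max_retry = 2;

	for (;;) {
		int length = simulated_read(&transient_failures);

		if (length >= 0)
			return length;		/* got data */
		if ((length == -ERESTARTSYS || length == -EAGAIN) &&
		    max_retry) {
			max_retry--;		/* brief retry, then give up */
			continue;
		}
		return -EAGAIN;			/* persistent failure */
	}
}
```

Two transient failures succeed on the third attempt; three exhaust the retry budget and fail.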
fs/locks.c | 23

@@ -2703,6 +2703,29 @@ int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
 }
 EXPORT_SYMBOL_GPL(vfs_cancel_lock);

+/**
+ * vfs_inode_has_locks - are any file locks held on @inode?
+ * @inode: inode to check for locks
+ *
+ * Return true if there are any FL_POSIX or FL_FLOCK locks currently
+ * set on @inode.
+ */
+bool vfs_inode_has_locks(struct inode *inode)
+{
+	struct file_lock_context *ctx;
+	bool ret;
+
+	ctx = smp_load_acquire(&inode->i_flctx);
+	if (!ctx)
+		return false;
+
+	spin_lock(&ctx->flc_lock);
+	ret = !list_empty(&ctx->flc_posix) || !list_empty(&ctx->flc_flock);
+	spin_unlock(&ctx->flc_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfs_inode_has_locks);
+
 #ifdef CONFIG_PROC_FS
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>

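The new vfs_inode_has_locks() above answers "any locks at all?" by checking both lists under the context's spinlock. A userspace sketch of the same shape, with a pthread mutex and counters standing in for `flc_lock` and the kernel's lock lists (names here are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct lock_ctx {
	pthread_mutex_t lock;	/* stand-in for flc_lock */
	int n_posix;		/* stand-in for list_empty(&flc_posix) */
	int n_flock;		/* stand-in for list_empty(&flc_flock) */
};

static bool ctx_has_locks(struct lock_ctx *ctx)
{
	bool ret;

	if (!ctx)
		return false;	/* no context ever attached: no locks */
	pthread_mutex_lock(&ctx->lock);
	ret = ctx->n_posix > 0 || ctx->n_flock > 0;
	pthread_mutex_unlock(&ctx->lock);
	return ret;
}
```

As in the kernel version, a NULL context is the common fast path and needs no locking at all.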
fs/mbcache.c | 121

@@ -90,12 +90,19 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		return -ENOMEM;

 	INIT_LIST_HEAD(&entry->e_list);
-	/* One ref for hash, one ref returned */
-	atomic_set(&entry->e_refcnt, 1);
+	/*
+	 * We create entry with two references. One reference is kept by the
+	 * hash table, the other reference is used to protect us from
+	 * mb_cache_entry_delete_or_get() until the entry is fully setup. This
+	 * avoids nesting of cache->c_list_lock into hash table bit locks which
+	 * is problematic for RT.
+	 */
+	atomic_set(&entry->e_refcnt, 2);
 	entry->e_key = key;
 	entry->e_value = value;
-	entry->e_reusable = reusable;
-	entry->e_referenced = 0;
+	entry->e_flags = 0;
+	if (reusable)
+		set_bit(MBE_REUSABLE_B, &entry->e_flags);
 	head = mb_cache_entry_head(cache, key);
 	hlist_bl_lock(head);
 	hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {

@@ -107,20 +114,24 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 	}
 	hlist_bl_add_head(&entry->e_hash_list, head);
 	hlist_bl_unlock(head);
-
 	spin_lock(&cache->c_list_lock);
 	list_add_tail(&entry->e_list, &cache->c_list);
-	/* Grab ref for LRU list */
-	atomic_inc(&entry->e_refcnt);
 	cache->c_entry_count++;
 	spin_unlock(&cache->c_list_lock);
+	mb_cache_entry_put(cache, entry);

 	return 0;
 }
 EXPORT_SYMBOL(mb_cache_entry_create);

-void __mb_cache_entry_free(struct mb_cache_entry *entry)
+void __mb_cache_entry_free(struct mb_cache *cache, struct mb_cache_entry *entry)
 {
+	struct hlist_bl_head *head;
+
+	head = mb_cache_entry_head(cache, entry->e_key);
+	hlist_bl_lock(head);
+	hlist_bl_del(&entry->e_hash_list);
+	hlist_bl_unlock(head);
 	kmem_cache_free(mb_entry_cache, entry);
 }
 EXPORT_SYMBOL(__mb_cache_entry_free);

@@ -134,7 +145,7 @@ EXPORT_SYMBOL(__mb_cache_entry_free);
 */
 void mb_cache_entry_wait_unused(struct mb_cache_entry *entry)
 {
-	wait_var_event(&entry->e_refcnt, atomic_read(&entry->e_refcnt) <= 3);
+	wait_var_event(&entry->e_refcnt, atomic_read(&entry->e_refcnt) <= 2);
 }
 EXPORT_SYMBOL(mb_cache_entry_wait_unused);

@@ -155,10 +166,10 @@ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
 	while (node) {
 		entry = hlist_bl_entry(node, struct mb_cache_entry,
 				       e_hash_list);
-		if (entry->e_key == key && entry->e_reusable) {
-			atomic_inc(&entry->e_refcnt);
+		if (entry->e_key == key &&
+		    test_bit(MBE_REUSABLE_B, &entry->e_flags) &&
+		    atomic_inc_not_zero(&entry->e_refcnt))
 			goto out;
-		}
 		node = node->next;
 	}
 	entry = NULL;

@@ -218,10 +229,9 @@ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
 	head = mb_cache_entry_head(cache, key);
 	hlist_bl_lock(head);
 	hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
-		if (entry->e_key == key && entry->e_value == value) {
-			atomic_inc(&entry->e_refcnt);
+		if (entry->e_key == key && entry->e_value == value &&
+		    atomic_inc_not_zero(&entry->e_refcnt))
 			goto out;
-		}
 	}
 	entry = NULL;
 out:

@@ -281,37 +291,25 @@ EXPORT_SYMBOL(mb_cache_entry_delete);
 struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
 						    u32 key, u64 value)
 {
-	struct hlist_bl_node *node;
-	struct hlist_bl_head *head;
 	struct mb_cache_entry *entry;

-	head = mb_cache_entry_head(cache, key);
-	hlist_bl_lock(head);
-	hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
-		if (entry->e_key == key && entry->e_value == value) {
-			if (atomic_read(&entry->e_refcnt) > 2) {
-				atomic_inc(&entry->e_refcnt);
-				hlist_bl_unlock(head);
-				return entry;
-			}
-			/* We keep hash list reference to keep entry alive */
-			hlist_bl_del_init(&entry->e_hash_list);
-			hlist_bl_unlock(head);
-			spin_lock(&cache->c_list_lock);
-			if (!list_empty(&entry->e_list)) {
-				list_del_init(&entry->e_list);
-				if (!WARN_ONCE(cache->c_entry_count == 0,
-		"mbcache: attempt to decrement c_entry_count past zero"))
-					cache->c_entry_count--;
-				atomic_dec(&entry->e_refcnt);
-			}
-			spin_unlock(&cache->c_list_lock);
-			mb_cache_entry_put(cache, entry);
-			return NULL;
-		}
-	}
-	hlist_bl_unlock(head);
+	entry = mb_cache_entry_get(cache, key, value);
+	if (!entry)
+		return NULL;
+
+	/*
+	 * Drop the ref we got from mb_cache_entry_get() and the initial hash
+	 * ref if we are the last user
+	 */
+	if (atomic_cmpxchg(&entry->e_refcnt, 2, 0) != 2)
+		return entry;

+	spin_lock(&cache->c_list_lock);
+	if (!list_empty(&entry->e_list))
+		list_del_init(&entry->e_list);
+	cache->c_entry_count--;
+	spin_unlock(&cache->c_list_lock);
+	__mb_cache_entry_free(cache, entry);
 	return NULL;
 }
 EXPORT_SYMBOL(mb_cache_entry_delete_or_get);

@@ -325,7 +323,7 @@ EXPORT_SYMBOL(mb_cache_entry_delete_or_get);
 void mb_cache_entry_touch(struct mb_cache *cache,
 			  struct mb_cache_entry *entry)
 {
-	entry->e_referenced = 1;
+	set_bit(MBE_REFERENCED_B, &entry->e_flags);
 }
 EXPORT_SYMBOL(mb_cache_entry_touch);

@@ -343,42 +341,24 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 				     unsigned long nr_to_scan)
 {
 	struct mb_cache_entry *entry;
-	struct hlist_bl_head *head;
 	unsigned long shrunk = 0;

 	spin_lock(&cache->c_list_lock);
 	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
 		entry = list_first_entry(&cache->c_list,
 					 struct mb_cache_entry, e_list);
-		if (entry->e_referenced || atomic_read(&entry->e_refcnt) > 2) {
-			entry->e_referenced = 0;
+		/* Drop initial hash reference if there is no user */
+		if (test_bit(MBE_REFERENCED_B, &entry->e_flags) ||
+		    atomic_cmpxchg(&entry->e_refcnt, 1, 0) != 1) {
+			clear_bit(MBE_REFERENCED_B, &entry->e_flags);
 			list_move_tail(&entry->e_list, &cache->c_list);
 			continue;
 		}
 		list_del_init(&entry->e_list);
 		cache->c_entry_count--;
-		/*
-		 * We keep LRU list reference so that entry doesn't go away
-		 * from under us.
-		 */
 		spin_unlock(&cache->c_list_lock);
-		head = mb_cache_entry_head(cache, entry->e_key);
-		hlist_bl_lock(head);
-		/* Now a reliable check if the entry didn't get used... */
-		if (atomic_read(&entry->e_refcnt) > 2) {
-			hlist_bl_unlock(head);
-			spin_lock(&cache->c_list_lock);
-			list_add_tail(&entry->e_list, &cache->c_list);
-			cache->c_entry_count++;
-			continue;
-		}
-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
-			hlist_bl_del_init(&entry->e_hash_list);
-			atomic_dec(&entry->e_refcnt);
-		}
-		hlist_bl_unlock(head);
-		if (mb_cache_entry_put(cache, entry))
-			shrunk++;
+		__mb_cache_entry_free(cache, entry);
+		shrunk++;
 		cond_resched();
 		spin_lock(&cache->c_list_lock);
 	}

@@ -470,11 +450,6 @@ void mb_cache_destroy(struct mb_cache *cache)
 	 * point.
 	 */
 	list_for_each_entry_safe(entry, next, &cache->c_list, e_list) {
-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
-			hlist_bl_del_init(&entry->e_hash_list);
-			atomic_dec(&entry->e_refcnt);
-		} else
-			WARN_ON(1);
 		list_del(&entry->e_list);
 		WARN_ON(atomic_read(&entry->e_refcnt) != 1);
 		mb_cache_entry_put(cache, entry);

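The reworked mb_cache_entry_delete_or_get() above relies on one invariant: a live entry holds exactly two references (hash table plus the one the caller just took), so a single compare-and-swap from 2 to 0 both checks for "no other users" and claims the entry for freeing. A userspace sketch of that trick, with C11 atomics standing in for the kernel's atomic_t (names here are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct cache_entry {
	atomic_int refcnt;
};

/* Returns true when we were the last user and may free the entry;
 * false means another user still holds a reference, so the count is
 * left untouched. */
static bool claim_if_unused(struct cache_entry *e)
{
	int expected = 2;
	return atomic_compare_exchange_strong(&e->refcnt, &expected, 0);
}
```

Doing the test and the claim in one atomic step is what closes the window that the old read-then-decrement sequence left open.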
@@ -3514,6 +3514,17 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
 	case nfserr_noent:
 		xdr_truncate_encode(xdr, start_offset);
 		goto skip_entry;
+	case nfserr_jukebox:
+		/*
+		 * The pseudoroot should only display dentries that lead to
+		 * exports. If we get EJUKEBOX here, then we can't tell whether
+		 * this entry should be included. Just fail the whole READDIR
+		 * with NFS4ERR_DELAY in that case, and hope that the situation
+		 * will resolve itself by the client's next attempt.
+		 */
+		if (cd->rd_fhp->fh_export->ex_flags & NFSEXP_V4ROOT)
+			goto fail;
+		fallthrough;
 	default:
 		/*
 		 * If the client requested the RDATTR_ERROR attribute,

@@ -425,8 +425,8 @@ static void nfsd_shutdown_net(struct net *net)
 {
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);

-	nfsd_file_cache_shutdown_net(net);
 	nfs4_state_shutdown_net(net);
+	nfsd_file_cache_shutdown_net(net);
 	if (nn->lockd_up) {
 		lockd_down(net);
 		nn->lockd_up = false;

@@ -101,6 +101,10 @@ int attr_load_runs(struct ATTRIB *attr, struct ntfs_inode *ni,

 	asize = le32_to_cpu(attr->size);
 	run_off = le16_to_cpu(attr->nres.run_off);
+
+	if (run_off > asize)
+		return -EINVAL;
+
 	err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn,
 			    vcn ? *vcn : svcn, Add2Ptr(attr, run_off),
 			    asize - run_off);

@@ -1142,6 +1146,11 @@ int attr_load_runs_vcn(struct ntfs_inode *ni, enum ATTR_TYPE type,
 	CLST svcn, evcn;
 	u16 ro;

+	if (!ni) {
+		/* Is record corrupted? */
+		return -ENOENT;
+	}
+
 	attr = ni_find_attr(ni, NULL, NULL, type, name, name_len, &vcn, NULL);
 	if (!attr) {
 		/* Is record corrupted? */

@@ -1157,6 +1166,10 @@ int attr_load_runs_vcn(struct ntfs_inode *ni, enum ATTR_TYPE type,
 	}

 	ro = le16_to_cpu(attr->nres.run_off);
+
+	if (ro > le32_to_cpu(attr->size))
+		return -EINVAL;
+
 	err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn, svcn,
 			    Add2Ptr(attr, ro), le32_to_cpu(attr->size) - ro);
 	if (err < 0)

@@ -1832,6 +1845,11 @@ int attr_collapse_range(struct ntfs_inode *ni, u64 vbo, u64 bytes)
 			u16 le_sz;
 			u16 roff = le16_to_cpu(attr->nres.run_off);

+			if (roff > le32_to_cpu(attr->size)) {
+				err = -EINVAL;
+				goto out;
+			}
+
 			run_unpack_ex(RUN_DEALLOCATE, sbi, ni->mi.rno, svcn,
 				      evcn1 - 1, svcn, Add2Ptr(attr, roff),
 				      le32_to_cpu(attr->size) - roff);

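The ntfs3 hunks above (and several below) all add the same guard before unpacking run data: the on-disk run offset must not point past the end of the attribute record, otherwise the mapping pairs would be read out of bounds. A minimal sketch of that validation, with illustrative names:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* run_off and asize come straight from untrusted on-disk metadata;
 * only after this check is it safe to read asize - run_off bytes
 * starting at offset run_off inside the record. */
static int validate_run_off(uint32_t run_off, uint32_t asize)
{
	if (run_off > asize)
		return -EINVAL;	/* corrupted attribute record */
	return 0;
}
```

Note the boundary case run_off == asize passes the check but yields a zero-length run, which the unpacker handles.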
@@ -68,6 +68,11 @@ int ntfs_load_attr_list(struct ntfs_inode *ni, struct ATTRIB *attr)

 	run_init(&ni->attr_list.run);

+	if (run_off > le32_to_cpu(attr->size)) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	err = run_unpack_ex(&ni->attr_list.run, ni->mi.sbi, ni->mi.rno,
 			    0, le64_to_cpu(attr->nres.evcn), 0,
 			    Add2Ptr(attr, run_off),

@@ -666,7 +666,7 @@ int wnd_init(struct wnd_bitmap *wnd, struct super_block *sb, size_t nbits)
 	if (!wnd->bits_last)
 		wnd->bits_last = wbits;

-	wnd->free_bits = kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS);
+	wnd->free_bits = kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN);
 	if (!wnd->free_bits)
 		return -ENOMEM;

@@ -488,10 +488,10 @@ static int ntfs_truncate(struct inode *inode, loff_t new_size)

 	new_valid = ntfs_up_block(sb, min_t(u64, ni->i_valid, new_size));

-	ni_lock(ni);
-
 	truncate_setsize(inode, new_size);

+	ni_lock(ni);
+
 	down_write(&ni->file.run_lock);
 	err = attr_set_size(ni, ATTR_DATA, NULL, 0, &ni->file.run, new_size,
 			    &new_valid, ni->mi.sbi->options->prealloc, NULL);

@@ -567,6 +567,12 @@ static int ni_repack(struct ntfs_inode *ni)
 		}

 		roff = le16_to_cpu(attr->nres.run_off);
+
+		if (roff > le32_to_cpu(attr->size)) {
+			err = -EINVAL;
+			break;
+		}
+
 		err = run_unpack(&run, sbi, ni->mi.rno, svcn, evcn, svcn,
 				 Add2Ptr(attr, roff),
 				 le32_to_cpu(attr->size) - roff);

@@ -1541,6 +1547,9 @@ int ni_delete_all(struct ntfs_inode *ni)
 		asize = le32_to_cpu(attr->size);
 		roff = le16_to_cpu(attr->nres.run_off);

+		if (roff > asize)
+			return -EINVAL;
+
 		/* run==1 means unpack and deallocate. */
 		run_unpack_ex(RUN_DEALLOCATE, sbi, ni->mi.rno, svcn, evcn, svcn,
 			      Add2Ptr(attr, roff), asize - roff);

@@ -2242,6 +2251,11 @@ remove_wof:
 	asize = le32_to_cpu(attr->size);
 	roff = le16_to_cpu(attr->nres.run_off);

+	if (roff > asize) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	/*run==1 Means unpack and deallocate. */
 	run_unpack_ex(RUN_DEALLOCATE, sbi, ni->mi.rno, svcn, evcn, svcn,
 		      Add2Ptr(attr, roff), asize - roff);

@@ -1132,7 +1132,7 @@ static int read_log_page(struct ntfs_log *log, u32 vbo,
 		return -EINVAL;

 	if (!*buffer) {
-		to_free = kmalloc(bytes, GFP_NOFS);
+		to_free = kmalloc(log->page_size, GFP_NOFS);
 		if (!to_free)
 			return -ENOMEM;
 		*buffer = to_free;

@@ -1180,10 +1180,7 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
 			struct restart_info *info)
 {
 	u32 skip, vbo;
-	struct RESTART_HDR *r_page = kmalloc(DefaultLogPageSize, GFP_NOFS);
-
-	if (!r_page)
-		return -ENOMEM;
+	struct RESTART_HDR *r_page = NULL;

 	/* Determine which restart area we are looking for. */
 	if (first) {

@@ -1197,7 +1194,6 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
 	/* Loop continuously until we succeed. */
 	for (; vbo < l_size; vbo = 2 * vbo + skip, skip = 0) {
 		bool usa_error;
-		u32 sys_page_size;
 		bool brst, bchk;
 		struct RESTART_AREA *ra;

@@ -1251,24 +1247,6 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
 			goto check_result;
 		}

-		/* Read the entire restart area. */
-		sys_page_size = le32_to_cpu(r_page->sys_page_size);
-		if (DefaultLogPageSize != sys_page_size) {
-			kfree(r_page);
-			r_page = kzalloc(sys_page_size, GFP_NOFS);
-			if (!r_page)
-				return -ENOMEM;
-
-			if (read_log_page(log, vbo,
-					  (struct RECORD_PAGE_HDR **)&r_page,
-					  &usa_error)) {
-				/* Ignore any errors. */
-				kfree(r_page);
-				r_page = NULL;
-				continue;
-			}
-		}
-
 		if (is_client_area_valid(r_page, usa_error)) {
 			info->valid_page = true;
 			ra = Add2Ptr(r_page, le16_to_cpu(r_page->ra_off));

@@ -2727,6 +2705,9 @@ static inline bool check_attr(const struct MFT_REC *rec,
 			return false;
 		}

+		if (run_off > asize)
+			return false;
+
 		if (run_unpack(NULL, sbi, 0, svcn, evcn, svcn,
 			       Add2Ptr(attr, run_off), asize - run_off) < 0) {
 			return false;

@@ -4769,6 +4750,12 @@ fake_attr:
 		u16 roff = le16_to_cpu(attr->nres.run_off);
 		CLST svcn = le64_to_cpu(attr->nres.svcn);

+		if (roff > t32) {
+			kfree(oa->attr);
+			oa->attr = NULL;
+			goto fake_attr;
+		}
+
 		err = run_unpack(&oa->run0, sbi, inode->i_ino, svcn,
 				 le64_to_cpu(attr->nres.evcn), svcn,
 				 Add2Ptr(attr, roff), t32 - roff);

@@ -1878,9 +1878,10 @@ int ntfs_security_init(struct ntfs_sb_info *sbi)
 		goto out;
 	}

-	root_sdh = resident_data(attr);
+	root_sdh = resident_data_ex(attr, sizeof(struct INDEX_ROOT));
 	if (root_sdh->type != ATTR_ZERO ||
-	    root_sdh->rule != NTFS_COLLATION_TYPE_SECURITY_HASH) {
+	    root_sdh->rule != NTFS_COLLATION_TYPE_SECURITY_HASH ||
+	    offsetof(struct INDEX_ROOT, ihdr) + root_sdh->ihdr.used > attr->res.data_size) {
 		err = -EINVAL;
 		goto out;
 	}

@@ -1896,9 +1897,10 @@ int ntfs_security_init(struct ntfs_sb_info *sbi)
 		goto out;
 	}

-	root_sii = resident_data(attr);
+	root_sii = resident_data_ex(attr, sizeof(struct INDEX_ROOT));
 	if (root_sii->type != ATTR_ZERO ||
-	    root_sii->rule != NTFS_COLLATION_TYPE_UINT) {
+	    root_sii->rule != NTFS_COLLATION_TYPE_UINT ||
+	    offsetof(struct INDEX_ROOT, ihdr) + root_sii->ihdr.used > attr->res.data_size) {
 		err = -EINVAL;
 		goto out;
 	}

@@ -1017,6 +1017,12 @@ ok:
 		err = 0;
 	}

+	/* check for index header length */
+	if (offsetof(struct INDEX_BUFFER, ihdr) + ib->ihdr.used > bytes) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	in->index = ib;
 	*node = in;

@@ -129,6 +129,9 @@ next_attr:
 	rsize = attr->non_res ? 0 : le32_to_cpu(attr->res.data_size);
 	asize = le32_to_cpu(attr->size);

+	if (le16_to_cpu(attr->name_off) + attr->name_len > asize)
+		goto out;
+
 	switch (attr->type) {
 	case ATTR_STD:
 		if (attr->non_res ||

@@ -364,7 +367,13 @@ next_attr:
 attr_unpack_run:
 	roff = le16_to_cpu(attr->nres.run_off);

+	if (roff > asize) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	t64 = le64_to_cpu(attr->nres.svcn);
+
 	err = run_unpack_ex(run, sbi, ino, t64, le64_to_cpu(attr->nres.evcn),
 			    t64, Add2Ptr(attr, roff), asize - roff);
 	if (err < 0)

||||
@@ -220,6 +220,11 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
 			return NULL;
 		}

+		if (off + asize < off) {
+			/* overflow check */
+			return NULL;
+		}
+
 		attr = Add2Ptr(attr, asize);
 		off += asize;
 	}

@@ -260,6 +265,11 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
 	if (t16 + t32 > asize)
 		return NULL;

+	if (attr->name_len &&
+	    le16_to_cpu(attr->name_off) + sizeof(short) * attr->name_len > t16) {
+		return NULL;
+	}
+
 	return attr;
 }

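The mi_enum_attr() hunk above adds an unsigned-wraparound guard: with 32-bit offsets, `off + asize` can wrap to a small value instead of growing, so a corrupted attribute size could make the enumeration loop forever. The standard detection, sketched with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* An unsigned sum that wrapped is strictly smaller than either
 * operand, so comparing the result against one operand detects the
 * overflow without needing a wider type. */
static bool off_plus_asize_overflows(uint32_t off, uint32_t asize)
{
	return off + asize < off;
}
```

The same check works for any unsigned width, which is why it appears verbatim in kernel parsers of on-disk structures.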
@@ -789,7 +789,7 @@ static int ntfs_init_from_boot(struct super_block *sb, u32 sector_size,
 			      : (u32)boot->record_size
 					<< sbi->cluster_bits;

-	if (record_size > MAXIMUM_BYTES_PER_MFT)
+	if (record_size > MAXIMUM_BYTES_PER_MFT || record_size < SECTOR_SIZE)
 		goto out;

 	sbi->record_bits = blksize_bits(record_size);

@@ -1136,7 +1136,7 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
 		goto put_inode_out;
 	}
 	bytes = inode->i_size;
-	sbi->def_table = t = kmalloc(bytes, GFP_NOFS);
+	sbi->def_table = t = kmalloc(bytes, GFP_NOFS | __GFP_NOWARN);
 	if (!t) {
 		err = -ENOMEM;
 		goto put_inode_out;

@@ -1255,9 +1255,9 @@ load_root:
 	ref.low = cpu_to_le32(MFT_REC_ROOT);
 	ref.seq = cpu_to_le16(MFT_REC_ROOT);
 	inode = ntfs_iget5(sb, &ref, &NAME_ROOT);
-	if (IS_ERR(inode)) {
+	if (IS_ERR(inode) || !inode->i_op) {
 		ntfs_err(sb, "Failed to load root.");
-		err = PTR_ERR(inode);
+		err = IS_ERR(inode) ? PTR_ERR(inode) : -EINVAL;
 		goto out;
 	}

@@ -1276,6 +1276,7 @@ out:
 	 * Free resources here.
 	 * ntfs_fs_free will be called with fc->s_fs_info = NULL
 	 */
+	put_mount_options(sbi->options);
 	put_ntfs(sbi);
 	sb->s_fs_info = NULL;

@@ -589,29 +589,43 @@ static int ovl_create_or_link(struct dentry *dentry, struct inode *inode,
 			goto out_revert_creds;
 	}

-	err = -ENOMEM;
-	override_cred = prepare_creds();
-	if (override_cred) {
+	if (!attr->hardlink) {
+		err = -ENOMEM;
+		override_cred = prepare_creds();
+		if (!override_cred)
+			goto out_revert_creds;
+		/*
+		 * In the creation cases(create, mkdir, mknod, symlink),
+		 * ovl should transfer current's fs{u,g}id to underlying
+		 * fs. Because underlying fs want to initialize its new
+		 * inode owner using current's fs{u,g}id. And in this
+		 * case, the @inode is a new inode that is initialized
+		 * in inode_init_owner() to current's fs{u,g}id. So use
+		 * the inode's i_{u,g}id to override the cred's fs{u,g}id.
+		 *
+		 * But in the other hardlink case, ovl_link() does not
+		 * create a new inode, so just use the ovl mounter's
+		 * fs{u,g}id.
+		 */
 		override_cred->fsuid = inode->i_uid;
 		override_cred->fsgid = inode->i_gid;
-		if (!attr->hardlink) {
-			err = security_dentry_create_files_as(dentry,
-					attr->mode, &dentry->d_name,
-					old_cred ? old_cred : current_cred(),
-					override_cred);
-			if (err) {
-				put_cred(override_cred);
-				goto out_revert_creds;
-			}
+		err = security_dentry_create_files_as(dentry,
+				attr->mode, &dentry->d_name,
+				old_cred ? old_cred : current_cred(),
+				override_cred);
+		if (err) {
+			put_cred(override_cred);
+			goto out_revert_creds;
 		}
 		hold_cred = override_creds(override_cred);
 		put_cred(override_cred);
+	}

-		if (!ovl_dentry_is_whiteout(dentry))
-			err = ovl_create_upper(dentry, inode, attr);
-		else
-			err = ovl_create_over_whiteout(dentry, inode, attr);
-	}
+	if (!ovl_dentry_is_whiteout(dentry))
+		err = ovl_create_upper(dentry, inode, attr);
+	else
+		err = ovl_create_over_whiteout(dentry, inode, attr);

 out_revert_creds:
 	ovl_revert_creds(dentry->d_sb, old_cred ?: hold_cred);
 	if (old_cred && hold_cred)

@@ -244,7 +244,7 @@ static int propagate_one(struct mount *m)
 	}
 	do {
 		struct mount *parent = last_source->mnt_parent;
-		if (last_source == first_source)
+		if (peers(last_source, first_source))
 			break;
 		done = parent->mnt_master == p;
 		if (done && peers(n, parent))

@@ -679,7 +679,7 @@ static int ramoops_parse_dt(struct platform_device *pdev,
 		field = value;					\
 	}

-	parse_u32("mem-type", pdata->record_size, pdata->mem_type);
+	parse_u32("mem-type", pdata->mem_type, pdata->mem_type);
 	parse_u32("record-size", pdata->record_size, 0);
 	parse_u32("console-size", pdata->console_size, 0);
 	parse_u32("ftrace-size", pdata->ftrace_size, 0);

@@ -761,7 +761,7 @@ static inline int notrace psz_kmsg_write_record(struct psz_context *cxt,
 		/* avoid destroying old data, allocate a new one */
 		len = zone->buffer_size + sizeof(*zone->buffer);
 		zone->oldbuf = zone->buffer;
-		zone->buffer = kzalloc(len, GFP_KERNEL);
+		zone->buffer = kzalloc(len, GFP_ATOMIC);
 		if (!zone->buffer) {
 			zone->buffer = zone->oldbuf;
 			return -ENOMEM;

@@ -2317,6 +2317,8 @@ static int vfs_setup_quota_inode(struct inode *inode, int type)
 	struct super_block *sb = inode->i_sb;
 	struct quota_info *dqopt = sb_dqopt(sb);

+	if (is_bad_inode(inode))
+		return -EUCLEAN;
 	if (!S_ISREG(inode->i_mode))
 		return -EACCES;
 	if (IS_RDONLY(inode))

@@ -599,7 +599,7 @@ static void udf_do_extend_final_block(struct inode *inode,
 	 */
 	if (new_elen <= (last_ext->extLength & UDF_EXTENT_LENGTH_MASK))
 		return;
-	added_bytes = (last_ext->extLength & UDF_EXTENT_LENGTH_MASK) - new_elen;
+	added_bytes = new_elen - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
 	last_ext->extLength += added_bytes;
 	UDF_I(inode)->i_lenExtents += added_bytes;
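The udf hunk above fixes reversed subtraction operands: when the final extent grows to new_elen, the byte delta is new minus old, and the old order produced a huge wrapped unsigned value. A trivial sketch, with plain unsigned ints standing in for the masked extLength fields:

```c
#include <assert.h>

/* The kernel returns early unless new_elen > old_elen, so the
 * subtraction below never wraps with the corrected operand order. */
static unsigned int extent_added_bytes(unsigned int old_elen,
				       unsigned int new_elen)
{
	return new_elen - old_elen;
}
```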