
New code for 5.5:

 - Fill out the build string
 - Prevent inode fork extent count overflows
 - Refactor the allocator to reduce long tail latency
 - Rework incore log locking a little to reduce spinning
 - Break up the xfs_iomap_begin functions into smaller more cohesive
 parts
 - Fix allocation alignment being dropped too early when the allocation
 request is for more blocks than an AG is large
 - Other small cleanups
 - Clean up file buftarg retrieval helpers
 - Hoist the resvsp and unresvsp ioctls to the vfs
 - Remove the undocumented biosize mount option, since it has never been
   mentioned as existing or supported on linux
 - Clean up some of the mount option printing and parsing
 - Enhance attr leaf verifier to check block structure
 - Check dirent and attr names for invalid characters before passing them
 to the vfs
 - Refactor open-coded bmbt walking
 - Fix a few places where we return EIO instead of EFSCORRUPTED after
 failing metadata sanity checks
 - Fix a synchronization problem between fallocate and aio dio corrupting
 the file length
 - Clean up various loose ends in the iomap and bmap code
 - Convert to the new mount api
 - Make sure we always log something when returning EFSCORRUPTED
 - Fix some problems where long running scrub loops could trigger soft
 lockup warnings and/or fail to exit due to fatal signals pending
 - Fix various Coverity complaints
 - Remove most of the function pointers from the directory code to reduce
 indirection penalties
 - Ensure that dquots are attached to the inode when performing unwritten
 extent conversion after io
 - Deuglify incore projid and crtime types
 - Fix another AGI/AGF locking order deadlock when renaming
 - Clean up some quota typedefs
 - Remove the FSSETDM ioctls which haven't done anything in 20 years
 - Fix some memory leaks when mounting the log fails
 - Fix an underflow when updating an xattr leaf freemap
 - Remove some trivial wrappers
 - Report metadata corruption as an error, not a (potentially) fatal
 assertion
 - Clean up the dir/attr buffer mapping code
 - Allow fatal signals to kill scrub during parent pointer checks
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEUzaAxoMeQq6m2jMV+H93GTRKtOsFAl3fNjcACgkQ+H93GTRK
 tOv/8w//Y0Oa9Paiy8+iPTChs3/PqeKp307Fj5KONG52haMCakEJFT5+/wpkIAJw
 uUmKiPolwN1ivviIUmIS14ThTJ7NV1jq0G0h/0tC25i/3hoJrGWdzqYJMlvhlqgE
 taHrjCwPTDkhRJ0D5QCrkkHPU7lSdquO5TWxltaqYLhyLIt8SkklD6dN1dHWEPnk
 k0j3TL+VqVJDYyEj1bLwJ0QUb2C3J8ygWnlviF/WxsSeJtJpGoeLEaYXhhsUK0Dt
 aHg70OM6zzFzrJJAtJeBXpgaFsG/Pqbcw4wUWSxEMWjVSJwCSKLuZ5F+p6NcqoEj
 HeLQkaGePoO61YCInk2JKLHIyx7ohqMOt7+Dm0mdbe1pvcKwV9ZcdkqKa8L/Fm6v
 bUP6a2hEpsGy7vLnkYxwYACTLPbGX3uLw8MUr6ZpJ+SpfVLktU4ycpr8dCkJkp6a
 0qOpEeHsBDy74NkMOUa7Qrju7lJ2GiL70qqBwaPe+ubcUa3U/3WAsSekSzXgUwn8
 Fap4r8wn7cUbxymAvO06RlU8YymuulAlyjwdo9gOL/Su/5POldss6dy1YuUtyq19
 CD6NtkHqEUMsTc2cI+H65H44aEeckB1j0D2Grm2uMchAh0GcTSFVNF6jony++B8k
 s2sL2dEw9/9vr0uc1TSVF5ezxaONuyaCXdYXUkkdyq3iNvfpRCg=
 =aACq
 -----END PGP SIGNATURE-----

Merge tag 'xfs-5.5-merge-16' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux

Pull XFS updates from Darrick Wong:
 "For this release, we changed quite a few things.

  Highlights:

   - Fixed some long tail latency problems in the block allocator

   - Removed some long deprecated (and for the past several years no-op)
     mount options and ioctls

   - Strengthened the extended attribute and directory verifiers

   - Audited and fixed all the places where we could return EFSCORRUPTED
     without logging anything

   - Refactored the old SGI space allocation ioctls to make the
     equivalent fallocate calls

   - Fixed a race between fallocate and directio

   - Fixed an integer overflow when files have more than a few
     billion(!) extents

   - Fixed a longstanding bug where quota accounting could be incorrect
     when performing unwritten extent conversion on a freshly mounted fs

   - Fixed various complaints in scrub about soft lockups and
     unresponsiveness to signals

   - De-vtable'd the directory handling code, which should make it
     faster

   - Converted to the new mount api, for better or for worse

   - Cleaned up some memory leaks

  and quite a lot of other smaller fixes and cleanups.

  A more detailed summary:

   - Fill out the build string

   - Prevent inode fork extent count overflows

   - Refactor the allocator to reduce long tail latency

   - Rework incore log locking a little to reduce spinning

   - Break up the xfs_iomap_begin functions into smaller more cohesive
     parts

   - Fix allocation alignment being dropped too early when the
     allocation request is for more blocks than an AG is large

   - Other small cleanups

   - Clean up file buftarg retrieval helpers

   - Hoist the resvsp and unresvsp ioctls to the vfs

   - Remove the undocumented biosize mount option, since it has never
     been mentioned as existing or supported on linux

   - Clean up some of the mount option printing and parsing

   - Enhance attr leaf verifier to check block structure

   - Check dirent and attr names for invalid characters before passing
     them to the vfs

   - Refactor open-coded bmbt walking

   - Fix a few places where we return EIO instead of EFSCORRUPTED after
     failing metadata sanity checks

   - Fix a synchronization problem between fallocate and aio dio
     corrupting the file length

   - Clean up various loose ends in the iomap and bmap code

   - Convert to the new mount api

   - Make sure we always log something when returning EFSCORRUPTED

   - Fix some problems where long running scrub loops could trigger soft
     lockup warnings and/or fail to exit due to fatal signals pending

   - Fix various Coverity complaints

   - Remove most of the function pointers from the directory code to
     reduce indirection penalties

   - Ensure that dquots are attached to the inode when performing
     unwritten extent conversion after io

   - Deuglify incore projid and crtime types

   - Fix another AGI/AGF locking order deadlock when renaming

   - Clean up some quota typedefs

   - Remove the FSSETDM ioctls which haven't done anything in 20 years

   - Fix some memory leaks when mounting the log fails

   - Fix an underflow when updating an xattr leaf freemap

   - Remove some trivial wrappers

   - Report metadata corruption as an error, not a (potentially) fatal
     assertion

   - Clean up the dir/attr buffer mapping code

   - Allow fatal signals to kill scrub during parent pointer checks"

* tag 'xfs-5.5-merge-16' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: (198 commits)
  xfs: allow parent directory scans to be interrupted with fatal signals
  xfs: remove the mappedbno argument to xfs_da_get_buf
  xfs: remove the mappedbno argument to xfs_da_read_buf
  xfs: split xfs_da3_node_read
  xfs: remove the mappedbno argument to xfs_dir3_leafn_read
  xfs: remove the mappedbno argument to xfs_dir3_leaf_read
  xfs: remove the mappedbno argument to xfs_attr3_leaf_read
  xfs: remove the mappedbno argument to xfs_da_reada_buf
  xfs: improve the xfs_dabuf_map calling conventions
  xfs: refactor xfs_dabuf_map
  xfs: simplify mappedbno handling in xfs_da_{get,read}_buf
  xfs: report corruption only as a regular error
  xfs: Remove kmem_zone_free() wrapper
  xfs: Remove kmem_zone_destroy() wrapper
  xfs: Remove slab init wrappers
  xfs: fix attr leaf header freemap.size underflow
  xfs: fix some memory leaks in log recovery
  xfs: fix another missing include
  xfs: remove XFS_IOC_FSSETDM and XFS_IOC_FSSETDM_BY_HANDLE
  xfs: remove duplicated include from xfs_dir2_data.c
  ...
Linus Torvalds 2019-12-02 14:46:22 -08:00
commit 97eeb4d9d7
126 changed files with 5841 additions and 6284 deletions

View File

@@ -185,15 +185,27 @@ COMPAT_SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd,
 	/* handled by some ->ioctl(); always a pointer to int */
 	case FIONREAD:
 		goto found_handler;
-	/* these two get messy on amd64 due to alignment differences */
+	/* these get messy on amd64 due to alignment differences */
 #if defined(CONFIG_X86_64)
 	case FS_IOC_RESVSP_32:
 	case FS_IOC_RESVSP64_32:
-		error = compat_ioctl_preallocate(f.file, compat_ptr(arg));
+		error = compat_ioctl_preallocate(f.file, 0, compat_ptr(arg));
+		goto out_fput;
+	case FS_IOC_UNRESVSP_32:
+	case FS_IOC_UNRESVSP64_32:
+		error = compat_ioctl_preallocate(f.file, FALLOC_FL_PUNCH_HOLE,
+				compat_ptr(arg));
+		goto out_fput;
+	case FS_IOC_ZERO_RANGE_32:
+		error = compat_ioctl_preallocate(f.file, FALLOC_FL_ZERO_RANGE,
+				compat_ptr(arg));
 		goto out_fput;
 #else
 	case FS_IOC_RESVSP:
 	case FS_IOC_RESVSP64:
+	case FS_IOC_UNRESVSP:
+	case FS_IOC_UNRESVSP64:
+	case FS_IOC_ZERO_RANGE:
 		goto found_handler;
 #endif

View File

@@ -467,7 +467,7 @@ EXPORT_SYMBOL(generic_block_fiemap);
  * Only the l_start, l_len and l_whence fields of the 'struct space_resv'
  * are used here, rest are ignored.
  */
-int ioctl_preallocate(struct file *filp, void __user *argp)
+int ioctl_preallocate(struct file *filp, int mode, void __user *argp)
 {
 	struct inode *inode = file_inode(filp);
 	struct space_resv sr;
@@ -488,13 +488,14 @@ int ioctl_preallocate(struct file *filp, void __user *argp)
 		return -EINVAL;
 	}
-	return vfs_fallocate(filp, FALLOC_FL_KEEP_SIZE, sr.l_start, sr.l_len);
+	return vfs_fallocate(filp, mode | FALLOC_FL_KEEP_SIZE, sr.l_start,
+			sr.l_len);
 }
 /* on ia32 l_start is on a 32-bit boundary */
 #if defined CONFIG_COMPAT && defined(CONFIG_X86_64)
 /* just account for different alignment */
-int compat_ioctl_preallocate(struct file *file,
+int compat_ioctl_preallocate(struct file *file, int mode,
 		struct space_resv_32 __user *argp)
 {
 	struct inode *inode = file_inode(file);
@@ -516,7 +517,7 @@ int compat_ioctl_preallocate(struct file *file,
 		return -EINVAL;
 	}
-	return vfs_fallocate(file, FALLOC_FL_KEEP_SIZE, sr.l_start, sr.l_len);
+	return vfs_fallocate(file, mode | FALLOC_FL_KEEP_SIZE, sr.l_start, sr.l_len);
 }
 #endif
@@ -533,7 +534,12 @@ static int file_ioctl(struct file *filp, unsigned int cmd,
 		return put_user(i_size_read(inode) - filp->f_pos, p);
 	case FS_IOC_RESVSP:
 	case FS_IOC_RESVSP64:
-		return ioctl_preallocate(filp, p);
+		return ioctl_preallocate(filp, 0, p);
+	case FS_IOC_UNRESVSP:
+	case FS_IOC_UNRESVSP64:
+		return ioctl_preallocate(filp, FALLOC_FL_PUNCH_HOLE, p);
+	case FS_IOC_ZERO_RANGE:
+		return ioctl_preallocate(filp, FALLOC_FL_ZERO_RANGE, p);
 	}
 	return vfs_ioctl(filp, cmd, arg);
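
The hunks above route the old SGI preallocation ioctls through vfs_fallocate(), always ORing in FALLOC_FL_KEEP_SIZE and adding FALLOC_FL_PUNCH_HOLE or FALLOC_FL_ZERO_RANGE for the unreserve and zero-range variants. A minimal userspace sketch of that equivalence follows; the file name and ranges are hypothetical and error handling is trimmed, so treat it as an illustration rather than code from this series.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* roughly XFS_IOC_RESVSP64: reserve space without changing i_size */
	if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1 << 20) < 0)
		perror("reserve");

	/* roughly XFS_IOC_UNRESVSP64: punch a hole, leaving i_size alone */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, 64 << 10) < 0)
		perror("unreserve");

	/* roughly XFS_IOC_ZERO_RANGE: zero a range without extending the file */
	if (fallocate(fd, FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE,
		      64 << 10, 64 << 10) < 0)
		perror("zero range");

	close(fd);
	return 0;
}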

View File

@ -27,7 +27,6 @@ xfs-y += $(addprefix libxfs/, \
xfs_bmap_btree.o \ xfs_bmap_btree.o \
xfs_btree.o \ xfs_btree.o \
xfs_da_btree.o \ xfs_da_btree.o \
xfs_da_format.o \
xfs_defer.o \ xfs_defer.o \
xfs_dir2.o \ xfs_dir2.o \
xfs_dir2_block.o \ xfs_dir2_block.o \

View File

@@ -32,7 +32,7 @@ kmem_alloc(size_t size, xfs_km_flags_t flags)
 /*
- * __vmalloc() will allocate data pages and auxillary structures (e.g.
+ * __vmalloc() will allocate data pages and auxiliary structures (e.g.
  * pagetables) with GFP_KERNEL, yet we may be under GFP_NOFS context here. Hence
  * we need to tell memory reclaim that we are in such a context via
  * PF_MEMALLOC_NOFS to prevent memory reclaim re-entering the filesystem here

View File

@@ -78,39 +78,9 @@ kmem_zalloc_large(size_t size, xfs_km_flags_t flags)
  * Zone interfaces
  */
-#define KM_ZONE_HWALIGN	SLAB_HWCACHE_ALIGN
-#define KM_ZONE_RECLAIM	SLAB_RECLAIM_ACCOUNT
-#define KM_ZONE_SPREAD	SLAB_MEM_SPREAD
-#define KM_ZONE_ACCOUNT	SLAB_ACCOUNT
 #define kmem_zone	kmem_cache
 #define kmem_zone_t	struct kmem_cache
-static inline kmem_zone_t *
-kmem_zone_init(int size, char *zone_name)
-{
-	return kmem_cache_create(zone_name, size, 0, 0, NULL);
-}
-static inline kmem_zone_t *
-kmem_zone_init_flags(int size, char *zone_name, slab_flags_t flags,
-		     void (*construct)(void *))
-{
-	return kmem_cache_create(zone_name, size, 0, flags, construct);
-}
-static inline void
-kmem_zone_free(kmem_zone_t *zone, void *ptr)
-{
-	kmem_cache_free(zone, ptr);
-}
-static inline void
-kmem_zone_destroy(kmem_zone_t *zone)
-{
-	kmem_cache_destroy(zone);
-}
 extern void *kmem_zone_alloc(kmem_zone_t *, xfs_km_flags_t);
 static inline void *
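
Since the removed helpers were thin shims over the slab allocator, callers now invoke the kmem_cache_* API directly, as a later hunk does when it converts kmem_zone_free() to kmem_cache_free(). A minimal kernel-module sketch of the same pattern, with a hypothetical "xfs_foo" zone that is not part of this series:

#include <linux/module.h>
#include <linux/slab.h>

struct xfs_foo {
	int	dummy;
};

static struct kmem_cache *xfs_foo_zone;

static int __init foo_init(void)
{
	struct xfs_foo	*foo;

	/* was: xfs_foo_zone = kmem_zone_init(sizeof(*foo), "xfs_foo"); */
	xfs_foo_zone = kmem_cache_create("xfs_foo", sizeof(*foo), 0, 0, NULL);
	if (!xfs_foo_zone)
		return -ENOMEM;

	foo = kmem_cache_alloc(xfs_foo_zone, GFP_KERNEL);
	if (foo) {
		/* was: kmem_zone_free(xfs_foo_zone, foo); */
		kmem_cache_free(xfs_foo_zone, foo);
	}
	return 0;
}

static void __exit foo_exit(void)
{
	/* was: kmem_zone_destroy(xfs_foo_zone); */
	kmem_cache_destroy(xfs_foo_zone);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");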

View File

@@ -19,6 +19,8 @@
 #include "xfs_btree.h"
 #include "xfs_refcount_btree.h"
 #include "xfs_ialloc_btree.h"
+#include "xfs_sb.h"
+#include "xfs_ag_resv.h"
 /*
  * Per-AG Block Reservations

(File diff suppressed because it is too large.)

View File

@@ -54,7 +54,6 @@ typedef struct xfs_alloc_arg {
 	struct xfs_mount *mp;		/* file system mount point */
 	struct xfs_buf	*agbp;		/* buffer for a.g. freelist header */
 	struct xfs_perag *pag;		/* per-ag struct for this agno */
-	struct xfs_inode *ip;		/* for userdata zeroing method */
 	xfs_fsblock_t	fsbno;		/* file system block number */
 	xfs_agnumber_t	agno;		/* allocation group number */
 	xfs_agblock_t	agbno;		/* allocation group-relative block # */
@@ -83,20 +82,7 @@ typedef struct xfs_alloc_arg {
  */
 #define XFS_ALLOC_USERDATA		(1 << 0)/* allocation is for user data*/
 #define XFS_ALLOC_INITIAL_USER_DATA	(1 << 1)/* special case start of file */
-#define XFS_ALLOC_USERDATA_ZERO		(1 << 2)/* zero extent on allocation */
-#define XFS_ALLOC_NOBUSY		(1 << 3)/* Busy extents not allowed */
-static inline bool
-xfs_alloc_is_userdata(int datatype)
-{
-	return (datatype & ~XFS_ALLOC_NOBUSY) != 0;
-}
-static inline bool
-xfs_alloc_allow_busy_reuse(int datatype)
-{
-	return (datatype & XFS_ALLOC_NOBUSY) == 0;
-}
+#define XFS_ALLOC_NOBUSY		(1 << 2)/* Busy extents not allowed */
 /* freespace limit calculations */
 #define XFS_ALLOC_AGFL_RESERVE	4

View File

@@ -507,6 +507,7 @@ xfs_allocbt_init_cursor(
 	cur->bc_private.a.agbp = agbp;
 	cur->bc_private.a.agno = agno;
+	cur->bc_private.a.priv.abt.active = false;
 	if (xfs_sb_version_hascrc(&mp->m_sb))
 		cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;

View File

@@ -589,7 +589,7 @@ xfs_attr_leaf_addname(
 	 */
 	dp = args->dp;
 	args->blkno = 0;
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp);
 	if (error)
 		return error;
@@ -715,7 +715,7 @@ xfs_attr_leaf_addname(
 	 * remove the "old" attr from that block (neat, huh!)
 	 */
 	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno,
-				   -1, &bp);
+				   &bp);
 	if (error)
 		return error;
@@ -769,7 +769,7 @@ xfs_attr_leaf_removename(
 	 */
 	dp = args->dp;
 	args->blkno = 0;
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp);
 	if (error)
 		return error;
@@ -813,7 +813,7 @@ xfs_attr_leaf_get(xfs_da_args_t *args)
 	trace_xfs_attr_leaf_get(args);
 	args->blkno = 0;
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp);
 	if (error)
 		return error;
@@ -1173,7 +1173,7 @@ xfs_attr_node_removename(
 	ASSERT(state->path.blk[0].bp);
 	state->path.blk[0].bp = NULL;
-	error = xfs_attr3_leaf_read(args->trans, args->dp, 0, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp);
 	if (error)
 		goto out;
@@ -1266,10 +1266,9 @@ xfs_attr_refillstate(xfs_da_state_t *state)
 	ASSERT((path->active >= 0) && (path->active < XFS_DA_NODE_MAXDEPTH));
 	for (blk = path->blk, level = 0; level < path->active; blk++, level++) {
 		if (blk->disk_blkno) {
-			error = xfs_da3_node_read(state->args->trans,
-						state->args->dp,
-						blk->blkno, blk->disk_blkno,
-						&blk->bp, XFS_ATTR_FORK);
+			error = xfs_da3_node_read_mapped(state->args->trans,
+					state->args->dp, blk->disk_blkno,
+					&blk->bp, XFS_ATTR_FORK);
 			if (error)
 				return error;
 		} else {
@@ -1285,10 +1284,9 @@ xfs_attr_refillstate(xfs_da_state_t *state)
 	ASSERT((path->active >= 0) && (path->active < XFS_DA_NODE_MAXDEPTH));
 	for (blk = path->blk, level = 0; level < path->active; blk++, level++) {
 		if (blk->disk_blkno) {
-			error = xfs_da3_node_read(state->args->trans,
-						state->args->dp,
-						blk->blkno, blk->disk_blkno,
-						&blk->bp, XFS_ATTR_FORK);
+			error = xfs_da3_node_read_mapped(state->args->trans,
+					state->args->dp, blk->disk_blkno,
+					&blk->bp, XFS_ATTR_FORK);
 			if (error)
 				return error;
 		} else {

View File

@@ -232,6 +232,61 @@ xfs_attr3_leaf_hdr_to_disk(
 	}
 }
+static xfs_failaddr_t
+xfs_attr3_leaf_verify_entry(
+	struct xfs_mount		*mp,
+	char				*buf_end,
+	struct xfs_attr_leafblock	*leaf,
+	struct xfs_attr3_icleaf_hdr	*leafhdr,
+	struct xfs_attr_leaf_entry	*ent,
+	int				idx,
+	__u32				*last_hashval)
+{
+	struct xfs_attr_leaf_name_local	*lentry;
+	struct xfs_attr_leaf_name_remote *rentry;
+	char				*name_end;
+	unsigned int			nameidx;
+	unsigned int			namesize;
+	__u32				hashval;
+	/* hash order check */
+	hashval = be32_to_cpu(ent->hashval);
+	if (hashval < *last_hashval)
+		return __this_address;
+	*last_hashval = hashval;
+	nameidx = be16_to_cpu(ent->nameidx);
+	if (nameidx < leafhdr->firstused || nameidx >= mp->m_attr_geo->blksize)
+		return __this_address;
+	/*
+	 * Check the name information. The namelen fields are u8 so we can't
+	 * possibly exceed the maximum name length of 255 bytes.
+	 */
+	if (ent->flags & XFS_ATTR_LOCAL) {
+		lentry = xfs_attr3_leaf_name_local(leaf, idx);
+		namesize = xfs_attr_leaf_entsize_local(lentry->namelen,
+				be16_to_cpu(lentry->valuelen));
+		name_end = (char *)lentry + namesize;
+		if (lentry->namelen == 0)
+			return __this_address;
+	} else {
+		rentry = xfs_attr3_leaf_name_remote(leaf, idx);
+		namesize = xfs_attr_leaf_entsize_remote(rentry->namelen);
+		name_end = (char *)rentry + namesize;
+		if (rentry->namelen == 0)
+			return __this_address;
+		if (!(ent->flags & XFS_ATTR_INCOMPLETE) &&
+		    rentry->valueblk == 0)
+			return __this_address;
+	}
+	if (name_end > buf_end)
+		return __this_address;
+	return NULL;
+}
 static xfs_failaddr_t
 xfs_attr3_leaf_verify(
 	struct xfs_buf			*bp)
@@ -240,7 +295,10 @@ xfs_attr3_leaf_verify(
 	struct xfs_mount		*mp = bp->b_mount;
 	struct xfs_attr_leafblock	*leaf = bp->b_addr;
 	struct xfs_attr_leaf_entry	*entries;
+	struct xfs_attr_leaf_entry	*ent;
+	char				*buf_end;
 	uint32_t			end;	/* must be 32bit - see below */
+	__u32				last_hashval = 0;
 	int				i;
 	xfs_failaddr_t			fa;
@@ -273,8 +331,13 @@ xfs_attr3_leaf_verify(
 	    (char *)bp->b_addr + ichdr.firstused)
 		return __this_address;
-	/* XXX: need to range check rest of attr header values */
-	/* XXX: hash order check? */
+	buf_end = (char *)bp->b_addr + mp->m_attr_geo->blksize;
+	for (i = 0, ent = entries; i < ichdr.count; ent++, i++) {
+		fa = xfs_attr3_leaf_verify_entry(mp, buf_end, leaf, &ichdr,
+				ent, i, &last_hashval);
+		if (fa)
+			return fa;
+	}
 	/*
 	 * Quickly check the freemap information. Attribute data has to be
@@ -367,13 +430,12 @@ xfs_attr3_leaf_read(
 	struct xfs_trans	*tp,
 	struct xfs_inode	*dp,
 	xfs_dablk_t		bno,
-	xfs_daddr_t		mappedbno,
 	struct xfs_buf		**bpp)
 {
 	int			err;
-	err = xfs_da_read_buf(tp, dp, bno, mappedbno, bpp,
-				XFS_ATTR_FORK, &xfs_attr3_leaf_buf_ops);
+	err = xfs_da_read_buf(tp, dp, bno, 0, bpp, XFS_ATTR_FORK,
+			&xfs_attr3_leaf_buf_ops);
 	if (!err && tp && *bpp)
 		xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_ATTR_LEAF_BUF);
 	return err;
@@ -453,13 +515,15 @@ xfs_attr_copy_value(
  * special case for dev/uuid inodes, they have fixed size data forks.
  */
 int
-xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes)
+xfs_attr_shortform_bytesfit(
+	struct xfs_inode	*dp,
+	int			bytes)
 {
-	int			offset;
-	int			minforkoff;	/* lower limit on valid forkoff locations */
-	int			maxforkoff;	/* upper limit on valid forkoff locations */
-	int			dsize;
-	xfs_mount_t		*mp = dp->i_mount;
+	struct xfs_mount	*mp = dp->i_mount;
+	int64_t			dsize;
+	int			minforkoff;
+	int			maxforkoff;
+	int			offset;
 	/* rounded down */
 	offset = (XFS_LITINO(mp, dp->i_d.di_version) - bytes) >> 3;
@@ -525,7 +589,7 @@ xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes)
 	 * A data fork btree root must have space for at least
 	 * MINDBTPTRS key/ptr pairs if the data fork is small or empty.
 	 */
-	minforkoff = max(dsize, XFS_BMDR_SPACE_CALC(MINDBTPTRS));
+	minforkoff = max_t(int64_t, dsize, XFS_BMDR_SPACE_CALC(MINDBTPTRS));
 	minforkoff = roundup(minforkoff, 8) >> 3;
 	/* attr fork btree root can have at least this many key/ptr pairs */
@@ -764,7 +828,7 @@ xfs_attr_shortform_lookup(xfs_da_args_t *args)
 }
 /*
- * Retreive the attribute value and length.
+ * Retrieve the attribute value and length.
  *
  * If ATTR_KERNOVAL is specified, only the length needs to be returned.
 * Unlike a lookup, we only return an error if the attribute does not
@@ -924,7 +988,7 @@ xfs_attr_shortform_verify(
 	char			*endp;
 	struct xfs_ifork	*ifp;
 	int			i;
-	int			size;
+	int64_t			size;
 	ASSERT(ip->i_d.di_aformat == XFS_DINODE_FMT_LOCAL);
 	ifp = XFS_IFORK_PTR(ip, XFS_ATTR_FORK);
@@ -1080,7 +1144,6 @@ xfs_attr3_leaf_to_node(
 	struct xfs_attr_leafblock	*leaf;
 	struct xfs_attr3_icleaf_hdr	icleafhdr;
 	struct xfs_attr_leaf_entry	*entries;
-	struct xfs_da_node_entry	*btree;
 	struct xfs_da3_icnode_hdr	icnodehdr;
 	struct xfs_da_intnode		*node;
 	struct xfs_inode		*dp = args->dp;
@@ -1095,11 +1158,11 @@ xfs_attr3_leaf_to_node(
 	error = xfs_da_grow_inode(args, &blkno);
 	if (error)
 		goto out;
-	error = xfs_attr3_leaf_read(args->trans, dp, 0, -1, &bp1);
+	error = xfs_attr3_leaf_read(args->trans, dp, 0, &bp1);
 	if (error)
 		goto out;
-	error = xfs_da_get_buf(args->trans, dp, blkno, -1, &bp2, XFS_ATTR_FORK);
+	error = xfs_da_get_buf(args->trans, dp, blkno, &bp2, XFS_ATTR_FORK);
 	if (error)
 		goto out;
@@ -1120,18 +1183,17 @@ xfs_attr3_leaf_to_node(
 	if (error)
 		goto out;
 	node = bp1->b_addr;
-	dp->d_ops->node_hdr_from_disk(&icnodehdr, node);
-	btree = dp->d_ops->node_tree_p(node);
+	xfs_da3_node_hdr_from_disk(mp, &icnodehdr, node);
 	leaf = bp2->b_addr;
 	xfs_attr3_leaf_hdr_from_disk(args->geo, &icleafhdr, leaf);
 	entries = xfs_attr3_leaf_entryp(leaf);
 	/* both on-disk, don't endian-flip twice */
-	btree[0].hashval = entries[icleafhdr.count - 1].hashval;
-	btree[0].before = cpu_to_be32(blkno);
+	icnodehdr.btree[0].hashval = entries[icleafhdr.count - 1].hashval;
+	icnodehdr.btree[0].before = cpu_to_be32(blkno);
 	icnodehdr.count = 1;
-	dp->d_ops->node_hdr_to_disk(node, &icnodehdr);
+	xfs_da3_node_hdr_to_disk(dp->i_mount, node, &icnodehdr);
 	xfs_trans_log_buf(args->trans, bp1, 0, args->geo->blksize - 1);
 	error = 0;
 out:
@@ -1161,7 +1223,7 @@ xfs_attr3_leaf_create(
 	trace_xfs_attr_leaf_create(args);
-	error = xfs_da_get_buf(args->trans, args->dp, blkno, -1, &bp,
+	error = xfs_da_get_buf(args->trans, args->dp, blkno, &bp,
 					XFS_ATTR_FORK);
 	if (error)
 		return error;
@@ -1447,7 +1509,9 @@ xfs_attr3_leaf_add_work(
 	for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
 		if (ichdr->freemap[i].base == tmp) {
 			ichdr->freemap[i].base += sizeof(xfs_attr_leaf_entry_t);
-			ichdr->freemap[i].size -= sizeof(xfs_attr_leaf_entry_t);
+			ichdr->freemap[i].size -=
+				min_t(uint16_t, ichdr->freemap[i].size,
+						sizeof(xfs_attr_leaf_entry_t));
 		}
 	}
 	ichdr->usedbytes += xfs_attr_leaf_entsize(leaf, args->index);
@@ -1931,7 +1995,7 @@ xfs_attr3_leaf_toosmall(
 		if (blkno == 0)
 			continue;
 		error = xfs_attr3_leaf_read(state->args->trans, state->args->dp,
-					blkno, -1, &bp);
+					blkno, &bp);
 		if (error)
 			return error;
@@ -2281,8 +2345,10 @@ xfs_attr3_leaf_lookup_int(
 	leaf = bp->b_addr;
 	xfs_attr3_leaf_hdr_from_disk(args->geo, &ichdr, leaf);
 	entries = xfs_attr3_leaf_entryp(leaf);
-	if (ichdr.count >= args->geo->blksize / 8)
+	if (ichdr.count >= args->geo->blksize / 8) {
+		xfs_buf_corruption_error(bp);
 		return -EFSCORRUPTED;
+	}
 	/*
 	 * Binary search. (note: small blocks will skip this loop)
@@ -2298,10 +2364,14 @@ xfs_attr3_leaf_lookup_int(
 		else
 			break;
 	}
-	if (!(probe >= 0 && (!ichdr.count || probe < ichdr.count)))
+	if (!(probe >= 0 && (!ichdr.count || probe < ichdr.count))) {
+		xfs_buf_corruption_error(bp);
 		return -EFSCORRUPTED;
-	if (!(span <= 4 || be32_to_cpu(entry->hashval) == hashval))
+	}
+	if (!(span <= 4 || be32_to_cpu(entry->hashval) == hashval)) {
+		xfs_buf_corruption_error(bp);
 		return -EFSCORRUPTED;
+	}
 	/*
 	 * Since we may have duplicate hashval's, find the first matching
@@ -2661,7 +2731,7 @@ xfs_attr3_leaf_clearflag(
 	/*
 	 * Set up the operation.
 	 */
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp);
 	if (error)
 		return error;
@@ -2728,7 +2798,7 @@ xfs_attr3_leaf_setflag(
 	/*
 	 * Set up the operation.
 	 */
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp);
 	if (error)
 		return error;
@@ -2790,7 +2860,7 @@ xfs_attr3_leaf_flipflags(
 	/*
 	 * Read the block containing the "old" attr
 	 */
-	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, -1, &bp1);
+	error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno, &bp1);
 	if (error)
 		return error;
@@ -2799,7 +2869,7 @@ xfs_attr3_leaf_flipflags(
 	 */
 	if (args->blkno2 != args->blkno) {
 		error = xfs_attr3_leaf_read(args->trans, args->dp, args->blkno2,
-					-1, &bp2);
+					&bp2);
 		if (error)
 			return error;
 	} else {
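
The freemap hunk above (xfs_attr3_leaf_add_work) replaces an unconditional subtraction with a min_t() clamp because freemap[i].size is a 16-bit unsigned field and may be smaller than one leaf entry, in which case the subtraction would wrap around to a huge value. A standalone illustration of that wraparound, with made-up numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t size = 4;	/* freemap slot smaller than one entry */
	uint16_t entsize = 8;	/* stand-in for sizeof(xfs_attr_leaf_entry_t) */
	uint16_t broken, clamped;

	broken = size - entsize;				/* wraps to 65532 */
	clamped = size - (entsize < size ? entsize : size);	/* clamps to 0 */

	printf("broken=%u clamped=%u\n", broken, clamped);
	return 0;
}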

View File

@@ -16,6 +16,29 @@ struct xfs_da_state_blk;
 struct xfs_inode;
 struct xfs_trans;
+/*
+ * Incore version of the attribute leaf header.
+ */
+struct xfs_attr3_icleaf_hdr {
+	uint32_t	forw;
+	uint32_t	back;
+	uint16_t	magic;
+	uint16_t	count;
+	uint16_t	usedbytes;
+	/*
+	 * Firstused is 32-bit here instead of 16-bit like the on-disk variant
+	 * to support maximum fsb size of 64k without overflow issues throughout
+	 * the attr code. Instead, the overflow condition is handled on
+	 * conversion to/from disk.
+	 */
+	uint32_t	firstused;
+	__u8		holes;
+	struct {
+		uint16_t	base;
+		uint16_t	size;
+	} freemap[XFS_ATTR_LEAF_MAPSIZE];
+};
 /*
  * Used to keep a list of "remote value" extents when unlinking an inode.
  */
@@ -67,8 +90,8 @@ int xfs_attr3_leaf_add(struct xfs_buf *leaf_buffer,
 				 struct xfs_da_args *args);
 int	xfs_attr3_leaf_remove(struct xfs_buf *leaf_buffer,
 				    struct xfs_da_args *args);
-void	xfs_attr3_leaf_list_int(struct xfs_buf *bp,
-				      struct xfs_attr_list_context *context);
+int	xfs_attr3_leaf_list_int(struct xfs_buf *bp,
+				      struct xfs_attr_list_context *context);
 /*
  * Routines used for shrinking the Btree.
@@ -85,8 +108,7 @@ int xfs_attr_leaf_order(struct xfs_buf *leaf1_bp,
 			struct xfs_buf *leaf2_bp);
 int	xfs_attr_leaf_newentsize(struct xfs_da_args *args, int *local);
 int	xfs_attr3_leaf_read(struct xfs_trans *tp, struct xfs_inode *dp,
-			xfs_dablk_t bno, xfs_daddr_t mappedbno,
-			struct xfs_buf **bpp);
+			xfs_dablk_t bno, struct xfs_buf **bpp);
 void	xfs_attr3_leaf_hdr_from_disk(struct xfs_da_geometry *geo,
 				     struct xfs_attr3_icleaf_hdr *to,
 				     struct xfs_attr_leafblock *from);

View File

@@ -19,6 +19,7 @@
 #include "xfs_trans.h"
 #include "xfs_bmap.h"
 #include "xfs_attr.h"
+#include "xfs_attr_remote.h"
 #include "xfs_trace.h"
 #include "xfs_error.h"

View File

@@ -5,6 +5,7 @@
  */
 #include "xfs.h"
 #include "xfs_log_format.h"
+#include "xfs_bit.h"
 /*
  * XFS bit manipulation routines, used in non-realtime code.

(File diff suppressed because it is too large.)

View File

@@ -105,11 +105,10 @@ xfs_btree_check_lblock(
 	xfs_failaddr_t		fa;
 	fa = __xfs_btree_check_lblock(cur, block, level, bp);
-	if (unlikely(XFS_TEST_ERROR(fa != NULL, mp,
-			XFS_ERRTAG_BTREE_CHECK_LBLOCK))) {
+	if (XFS_IS_CORRUPT(mp, fa != NULL) ||
+	    XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BTREE_CHECK_LBLOCK)) {
 		if (bp)
 			trace_xfs_btree_corrupt(bp, _RET_IP_);
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
 		return -EFSCORRUPTED;
 	}
 	return 0;
@@ -169,11 +168,10 @@ xfs_btree_check_sblock(
 	xfs_failaddr_t		fa;
 	fa = __xfs_btree_check_sblock(cur, block, level, bp);
-	if (unlikely(XFS_TEST_ERROR(fa != NULL, mp,
-			XFS_ERRTAG_BTREE_CHECK_SBLOCK))) {
+	if (XFS_IS_CORRUPT(mp, fa != NULL) ||
+	    XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BTREE_CHECK_SBLOCK)) {
 		if (bp)
 			trace_xfs_btree_corrupt(bp, _RET_IP_);
-		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
 		return -EFSCORRUPTED;
 	}
 	return 0;
@@ -384,7 +382,7 @@ xfs_btree_del_cursor(
 	/*
 	 * Free the cursor.
 	 */
-	kmem_zone_free(xfs_btree_cur_zone, cur);
+	kmem_cache_free(xfs_btree_cur_zone, cur);
 }
 /*
@@ -716,25 +714,6 @@ xfs_btree_get_bufs(
 	return xfs_trans_get_buf(tp, mp->m_ddev_targp, d, mp->m_bsize, 0);
 }
-/*
- * Check for the cursor referring to the last block at the given level.
- */
-int					/* 1=is last block, 0=not last block */
-xfs_btree_islastblock(
-	xfs_btree_cur_t		*cur,	/* btree cursor */
-	int			level)	/* level to check */
-{
-	struct xfs_btree_block	*block;	/* generic btree block pointer */
-	xfs_buf_t		*bp;	/* buffer containing block */
-	block = xfs_btree_get_block(cur, level, &bp);
-	xfs_btree_check_block(cur, block, level, bp);
-	if (cur->bc_flags & XFS_BTREE_LONG_PTRS)
-		return block->bb_u.l.bb_rightsib == cpu_to_be64(NULLFSBLOCK);
-	else
-		return block->bb_u.s.bb_rightsib == cpu_to_be32(NULLAGBLOCK);
-}
 /*
  * Change the cursor to point to the first record at the given level.
  * Other levels are unaffected.
@@ -1820,6 +1799,7 @@ xfs_btree_lookup_get_block(
 out_bad:
 	*blkp = NULL;
+	xfs_buf_corruption_error(bp);
 	xfs_trans_brelse(cur->bc_tp, bp);
 	return -EFSCORRUPTED;
 }
@@ -1867,7 +1847,7 @@ xfs_btree_lookup(
 	XFS_BTREE_STATS_INC(cur, lookup);
 	/* No such thing as a zero-level tree. */
-	if (cur->bc_nlevels == 0)
+	if (XFS_IS_CORRUPT(cur->bc_mp, cur->bc_nlevels == 0))
 		return -EFSCORRUPTED;
 	block = NULL;
@@ -1987,7 +1967,8 @@ xfs_btree_lookup(
 			error = xfs_btree_increment(cur, 0, &i);
 			if (error)
 				goto error0;
-			XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1);
+			if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
+				return -EFSCORRUPTED;
 			*stat = 1;
 			return 0;
 		}
@@ -2442,7 +2423,10 @@ xfs_btree_lshift(
 		if (error)
 			goto error0;
 		i = xfs_btree_firstrec(tcur, level);
-		XFS_WANT_CORRUPTED_GOTO(tcur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(tcur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		error = xfs_btree_decrement(tcur, level, &i);
 		if (error)
@@ -2609,7 +2593,10 @@ xfs_btree_rshift(
 	if (error)
 		goto error0;
 	i = xfs_btree_lastrec(tcur, level);
-	XFS_WANT_CORRUPTED_GOTO(tcur->bc_mp, i == 1, error0);
+	if (XFS_IS_CORRUPT(tcur->bc_mp, i != 1)) {
+		error = -EFSCORRUPTED;
+		goto error0;
+	}
 	error = xfs_btree_increment(tcur, level, &i);
 	if (error)
@@ -3463,7 +3450,10 @@ xfs_btree_insert(
 			goto error0;
 		}
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		level++;
 		/*
@@ -3867,15 +3857,24 @@ xfs_btree_delrec(
 		 * Actually any entry but the first would suffice.
 		 */
 		i = xfs_btree_lastrec(tcur, level);
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		error = xfs_btree_increment(tcur, level, &i);
 		if (error)
 			goto error0;
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		i = xfs_btree_lastrec(tcur, level);
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		/* Grab a pointer to the block. */
 		right = xfs_btree_get_block(tcur, level, &rbp);
@@ -3919,12 +3918,18 @@ xfs_btree_delrec(
 		rrecs = xfs_btree_get_numrecs(right);
 		if (!xfs_btree_ptr_is_null(cur, &lptr)) {
 			i = xfs_btree_firstrec(tcur, level);
-			XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+			if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+				error = -EFSCORRUPTED;
+				goto error0;
+			}
 			error = xfs_btree_decrement(tcur, level, &i);
 			if (error)
 				goto error0;
-			XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+			if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+				error = -EFSCORRUPTED;
+				goto error0;
+			}
 		}
 	}
@@ -3938,13 +3943,19 @@ xfs_btree_delrec(
 		 * previous block.
 		 */
 		i = xfs_btree_firstrec(tcur, level);
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		error = xfs_btree_decrement(tcur, level, &i);
 		if (error)
 			goto error0;
 		i = xfs_btree_firstrec(tcur, level);
-		XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, error0);
+		if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
+			error = -EFSCORRUPTED;
+			goto error0;
+		}
 		/* Grab a pointer to the block. */
 		left = xfs_btree_get_block(tcur, level, &lbp);
@@ -4286,6 +4297,7 @@ int
 xfs_btree_visit_blocks(
 	struct xfs_btree_cur		*cur,
 	xfs_btree_visit_blocks_fn	fn,
+	unsigned int			flags,
 	void				*data)
 {
 	union xfs_btree_ptr		lptr;
@@ -4311,6 +4323,11 @@ xfs_btree_visit_blocks(
 			/* save for the next iteration of the loop */
 			xfs_btree_copy_ptrs(cur, &lptr, ptr, 1);
+			if (!(flags & XFS_BTREE_VISIT_LEAVES))
+				continue;
+		} else if (!(flags & XFS_BTREE_VISIT_RECORDS)) {
+			continue;
 		}
 		/* for each buffer in the level */
@@ -4413,7 +4430,7 @@ xfs_btree_change_owner(
 	bbcoi.buffer_list = buffer_list;
 	return xfs_btree_visit_blocks(cur, xfs_btree_block_change_owner,
-			&bbcoi);
+			XFS_BTREE_VISIT_ALL, &bbcoi);
 }
 /* Verify the v5 fields of a long-format btree block. */
@@ -4865,7 +4882,7 @@ xfs_btree_count_blocks(
 {
 	*blocks = 0;
 	return xfs_btree_visit_blocks(cur, xfs_btree_count_blocks_helper,
-			blocks);
+			XFS_BTREE_VISIT_ALL, blocks);
 }
 /* Compare two btree pointers. */
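
Most of the churn above is a mechanical conversion from the old XFS_WANT_CORRUPTED_GOTO/RETURN macros to an explicit XFS_IS_CORRUPT() check that reports the corruption before returning -EFSCORRUPTED, which is what the "always log something when returning EFSCORRUPTED" item in the merge message refers to. A userspace sketch of the shape of that pattern; IS_CORRUPT() below is a stand-in for the kernel macro, not its real definition:

#include <stdbool.h>
#include <stdio.h>

#define EFSCORRUPTED	117	/* stand-in value (the kernel maps it to EUCLEAN) */

/* stand-in for XFS_IS_CORRUPT(): report once, then let the caller bail out */
#define IS_CORRUPT(fsname, expr) \
	((expr) ? (fprintf(stderr, "%s: metadata corruption: %s\n", \
			   (fsname), #expr), true) : false)

static int demo_lookup(int nlevels)
{
	/* No such thing as a zero-level tree. */
	if (IS_CORRUPT("demo", nlevels == 0))
		return -EFSCORRUPTED;
	return 0;
}

int main(void)
{
	printf("lookup on broken tree: %d\n", demo_lookup(0));
	printf("lookup on sane tree:   %d\n", demo_lookup(2));
	return 0;
}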

View File

@@ -183,6 +183,9 @@ union xfs_btree_cur_private {
 		unsigned long	nr_ops;		/* # record updates */
 		int		shape_changes;	/* # of extent splits */
 	} refc;
+	struct {
+		bool		active;		/* allocation cursor state */
+	} abt;
 };
@@ -314,14 +317,6 @@ xfs_btree_get_bufs(
 	xfs_agnumber_t		agno,	/* allocation group number */
 	xfs_agblock_t		agbno);	/* allocation group block number */
-/*
- * Check for the cursor referring to the last block at the given level.
- */
-int					/* 1=is last block, 0=not last block */
-xfs_btree_islastblock(
-	xfs_btree_cur_t		*cur,	/* btree cursor */
-	int			level);	/* level to check */
 /*
  * Compute first and last byte offsets for the fields given.
  * Interprets the offsets table, which contains struct field offsets.
@@ -482,8 +477,15 @@ int xfs_btree_query_all(struct xfs_btree_cur *cur, xfs_btree_query_range_fn fn,
 typedef int (*xfs_btree_visit_blocks_fn)(struct xfs_btree_cur *cur, int level,
 		void *data);
+/* Visit record blocks. */
+#define XFS_BTREE_VISIT_RECORDS		(1 << 0)
+/* Visit leaf blocks. */
+#define XFS_BTREE_VISIT_LEAVES		(1 << 1)
+/* Visit all blocks. */
+#define XFS_BTREE_VISIT_ALL		(XFS_BTREE_VISIT_RECORDS | \
+					 XFS_BTREE_VISIT_LEAVES)
 int xfs_btree_visit_blocks(struct xfs_btree_cur *cur,
-		xfs_btree_visit_blocks_fn fn, void *data);
+		xfs_btree_visit_blocks_fn fn, unsigned int flags, void *data);
 int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_extlen_t *blocks);
@@ -514,4 +516,21 @@ int xfs_btree_has_record(struct xfs_btree_cur *cur, union xfs_btree_irec *low,
 		union xfs_btree_irec *high, bool *exists);
 bool xfs_btree_has_more_records(struct xfs_btree_cur *cur);
+/* Does this cursor point to the last block in the given level? */
+static inline bool
+xfs_btree_islastblock(
+	xfs_btree_cur_t		*cur,
+	int			level)
+{
+	struct xfs_btree_block	*block;
+	struct xfs_buf		*bp;
+	block = xfs_btree_get_block(cur, level, &bp);
+	ASSERT(block && xfs_btree_check_block(cur, block, level, bp) == 0);
+	if (cur->bc_flags & XFS_BTREE_LONG_PTRS)
+		return block->bb_u.l.bb_rightsib == cpu_to_be64(NULLFSBLOCK);
+	return block->bb_u.s.bb_rightsib == cpu_to_be32(NULLAGBLOCK);
+}
 #endif	/* __XFS_BTREE_H__ */

(File diff suppressed because it is too large.)

View File

@@ -10,7 +10,6 @@
 struct xfs_inode;
 struct xfs_trans;
 struct zone;
-struct xfs_dir_ops;
 /*
  * Directory/attribute geometry information. There will be one of these for each
@@ -18,15 +17,23 @@ struct xfs_dir_ops;
  * structures will be attached to the xfs_mount.
  */
 struct xfs_da_geometry {
-	int		blksize;	/* da block size in bytes */
-	int		fsbcount;	/* da block size in filesystem blocks */
+	unsigned int	blksize;	/* da block size in bytes */
+	unsigned int	fsbcount;	/* da block size in filesystem blocks */
 	uint8_t		fsblog;		/* log2 of _filesystem_ block size */
 	uint8_t		blklog;		/* log2 of da block size */
-	uint		node_ents;	/* # of entries in a danode */
-	int		magicpct;	/* 37% of block size in bytes */
+	unsigned int	node_hdr_size;	/* danode header size in bytes */
+	unsigned int	node_ents;	/* # of entries in a danode */
+	unsigned int	magicpct;	/* 37% of block size in bytes */
 	xfs_dablk_t	datablk;	/* blockno of dir data v2 */
+	unsigned int	leaf_hdr_size;	/* dir2 leaf header size */
+	unsigned int	leaf_max_ents;	/* # of entries in dir2 leaf */
 	xfs_dablk_t	leafblk;	/* blockno of leaf data v2 */
+	unsigned int	free_hdr_size;	/* dir2 free header size */
+	unsigned int	free_max_bests;	/* # of bests entries in dir2 free */
 	xfs_dablk_t	freeblk;	/* blockno of free data v2 */
+	xfs_dir2_data_aoff_t data_first_offset;
+	size_t		data_entry_offset;
 };
@@ -124,6 +131,25 @@ typedef struct xfs_da_state {
 						/* for dirv2 extrablk is data */
 } xfs_da_state_t;
+/*
+ * In-core version of the node header to abstract the differences in the v2 and
+ * v3 disk format of the headers. Callers need to convert to/from disk format as
+ * appropriate.
+ */
+struct xfs_da3_icnode_hdr {
+	uint32_t	forw;
+	uint32_t	back;
+	uint16_t	magic;
+	uint16_t	count;
+	uint16_t	level;
+	/*
+	 * Pointer to the on-disk format entries, which are behind the
+	 * variable size (v4 vs v5) header in the on-disk block.
+	 */
+	struct xfs_da_node_entry *btree;
+};
 /*
  * Utility macros to aid in logging changed structure fields.
 */
@@ -132,16 +158,6 @@ typedef struct xfs_da_state {
 	(uint)(XFS_DA_LOGOFF(BASE, ADDR)), \
 	(uint)(XFS_DA_LOGOFF(BASE, ADDR)+(SIZE)-1)
-/*
- * Name ops for directory and/or attr name operations
- */
-struct xfs_nameops {
-	xfs_dahash_t	(*hashname)(struct xfs_name *);
-	enum xfs_dacmp	(*compname)(struct xfs_da_args *,
-				    const unsigned char *, int);
-};
 /*========================================================================
  * Function prototypes.
  *========================================================================*/
@@ -172,25 +188,28 @@ int xfs_da3_path_shift(xfs_da_state_t *state, xfs_da_state_path_t *path,
 int	xfs_da3_blk_link(xfs_da_state_t *state, xfs_da_state_blk_t *old_blk,
 			 xfs_da_state_blk_t *new_blk);
 int	xfs_da3_node_read(struct xfs_trans *tp, struct xfs_inode *dp,
-			xfs_dablk_t bno, xfs_daddr_t mappedbno,
-			struct xfs_buf **bpp, int which_fork);
+			xfs_dablk_t bno, struct xfs_buf **bpp, int whichfork);
+int	xfs_da3_node_read_mapped(struct xfs_trans *tp, struct xfs_inode *dp,
+			xfs_daddr_t mappedbno, struct xfs_buf **bpp,
+			int whichfork);
 /*
  * Utility routines.
 */
+#define XFS_DABUF_MAP_HOLE_OK	(1 << 0)
 int	xfs_da_grow_inode(xfs_da_args_t *args, xfs_dablk_t *new_blkno);
 int	xfs_da_grow_inode_int(struct xfs_da_args *args, xfs_fileoff_t *bno,
 			      int count);
 int	xfs_da_get_buf(struct xfs_trans *trans, struct xfs_inode *dp,
-		       xfs_dablk_t bno, xfs_daddr_t mappedbno,
-		       struct xfs_buf **bp, int whichfork);
+		       xfs_dablk_t bno, struct xfs_buf **bp, int whichfork);
 int	xfs_da_read_buf(struct xfs_trans *trans, struct xfs_inode *dp,
-			xfs_dablk_t bno, xfs_daddr_t mappedbno,
-			struct xfs_buf **bpp, int whichfork,
-			const struct xfs_buf_ops *ops);
+			xfs_dablk_t bno, unsigned int flags, struct xfs_buf **bpp,
+			int whichfork, const struct xfs_buf_ops *ops);
 int	xfs_da_reada_buf(struct xfs_inode *dp, xfs_dablk_t bno,
-			xfs_daddr_t mapped_bno, int whichfork,
+			unsigned int flags, int whichfork,
 			const struct xfs_buf_ops *ops);
 int	xfs_da_shrink_inode(xfs_da_args_t *args, xfs_dablk_t dead_blkno,
 			    struct xfs_buf *dead_buf);
@@ -202,7 +221,11 @@ enum xfs_dacmp xfs_da_compname(struct xfs_da_args *args,
 xfs_da_state_t *xfs_da_state_alloc(void);
 void xfs_da_state_free(xfs_da_state_t *state);
+void	xfs_da3_node_hdr_from_disk(struct xfs_mount *mp,
+		struct xfs_da3_icnode_hdr *to, struct xfs_da_intnode *from);
+void	xfs_da3_node_hdr_to_disk(struct xfs_mount *mp,
+		struct xfs_da_intnode *to, struct xfs_da3_icnode_hdr *from);
 extern struct kmem_zone *xfs_da_state_zone;
-extern const struct xfs_nameops xfs_default_nameops;
 #endif	/* __XFS_DA_BTREE_H__ */
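
The new struct xfs_da3_icnode_hdr above carries a btree pointer so that callers no longer need the per-format function-pointer indirection that the removed xfs_dir_ops/xfs_nameops declarations represented: the from_disk helper decodes the fixed header fields and records where the entry array starts for either on-disk header size. A simplified userspace sketch of that unpacking pattern; the structs below are stand-ins, not the real v4/v5 disk layouts:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct disk_entry {			/* entries are big-endian on disk */
	uint32_t	hashval;
	uint32_t	before;
};

struct disk_node_short {		/* stand-in for the older, smaller header */
	uint16_t	count;
	uint16_t	level;
	struct disk_entry btree[];
};

struct disk_node_long {			/* stand-in for the larger CRC-era header */
	uint16_t	count;
	uint16_t	level;
	uint32_t	crc;
	uint64_t	owner;
	struct disk_entry btree[];
};

struct icnode_hdr {
	unsigned int	count;
	unsigned int	level;
	struct disk_entry *btree;	/* points into the disk buffer */
};

static void node_hdr_from_disk(struct icnode_hdr *to, void *from, int longhdr)
{
	if (longhdr) {
		struct disk_node_long *n = from;

		to->count = be16toh(n->count);
		to->level = be16toh(n->level);
		to->btree = n->btree;
	} else {
		struct disk_node_short *n = from;

		to->count = be16toh(n->count);
		to->level = be16toh(n->level);
		to->btree = n->btree;
	}
}

int main(void)
{
	struct disk_node_short node = {
		.count = htobe16(1),
		.level = htobe16(0),
	};
	struct icnode_hdr hdr;

	node_hdr_from_disk(&hdr, &node, 0);
	printf("count=%u level=%u entries at %p\n",
	       hdr.count, hdr.level, (void *)hdr.btree);
	return 0;
}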

View File

@@ -1,888 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2000,2002,2005 Silicon Graphics, Inc.
* Copyright (c) 2013 Red Hat, Inc.
* All Rights Reserved.
*/
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_mount.h"
#include "xfs_inode.h"
#include "xfs_dir2.h"
/*
* Shortform directory ops
*/
static int
xfs_dir2_sf_entsize(
struct xfs_dir2_sf_hdr *hdr,
int len)
{
int count = sizeof(struct xfs_dir2_sf_entry); /* namelen + offset */
count += len; /* name */
count += hdr->i8count ? XFS_INO64_SIZE : XFS_INO32_SIZE; /* ino # */
return count;
}
static int
xfs_dir3_sf_entsize(
struct xfs_dir2_sf_hdr *hdr,
int len)
{
return xfs_dir2_sf_entsize(hdr, len) + sizeof(uint8_t);
}
static struct xfs_dir2_sf_entry *
xfs_dir2_sf_nextentry(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
return (struct xfs_dir2_sf_entry *)
((char *)sfep + xfs_dir2_sf_entsize(hdr, sfep->namelen));
}
static struct xfs_dir2_sf_entry *
xfs_dir3_sf_nextentry(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
return (struct xfs_dir2_sf_entry *)
((char *)sfep + xfs_dir3_sf_entsize(hdr, sfep->namelen));
}
/*
* For filetype enabled shortform directories, the file type field is stored at
* the end of the name. Because it's only a single byte, endian conversion is
* not necessary. For non-filetype enable directories, the type is always
* unknown and we never store the value.
*/
static uint8_t
xfs_dir2_sfe_get_ftype(
struct xfs_dir2_sf_entry *sfep)
{
return XFS_DIR3_FT_UNKNOWN;
}
static void
xfs_dir2_sfe_put_ftype(
struct xfs_dir2_sf_entry *sfep,
uint8_t ftype)
{
ASSERT(ftype < XFS_DIR3_FT_MAX);
}
static uint8_t
xfs_dir3_sfe_get_ftype(
struct xfs_dir2_sf_entry *sfep)
{
uint8_t ftype;
ftype = sfep->name[sfep->namelen];
if (ftype >= XFS_DIR3_FT_MAX)
return XFS_DIR3_FT_UNKNOWN;
return ftype;
}
static void
xfs_dir3_sfe_put_ftype(
struct xfs_dir2_sf_entry *sfep,
uint8_t ftype)
{
ASSERT(ftype < XFS_DIR3_FT_MAX);
sfep->name[sfep->namelen] = ftype;
}
/*
* Inode numbers in short-form directories can come in two versions,
* either 4 bytes or 8 bytes wide. These helpers deal with the
* two forms transparently by looking at the headers i8count field.
*
* For 64-bit inode number the most significant byte must be zero.
*/
static xfs_ino_t
xfs_dir2_sf_get_ino(
struct xfs_dir2_sf_hdr *hdr,
uint8_t *from)
{
if (hdr->i8count)
return get_unaligned_be64(from) & 0x00ffffffffffffffULL;
else
return get_unaligned_be32(from);
}
static void
xfs_dir2_sf_put_ino(
struct xfs_dir2_sf_hdr *hdr,
uint8_t *to,
xfs_ino_t ino)
{
ASSERT((ino & 0xff00000000000000ULL) == 0);
if (hdr->i8count)
put_unaligned_be64(ino, to);
else
put_unaligned_be32(ino, to);
}
static xfs_ino_t
xfs_dir2_sf_get_parent_ino(
struct xfs_dir2_sf_hdr *hdr)
{
return xfs_dir2_sf_get_ino(hdr, hdr->parent);
}
static void
xfs_dir2_sf_put_parent_ino(
struct xfs_dir2_sf_hdr *hdr,
xfs_ino_t ino)
{
xfs_dir2_sf_put_ino(hdr, hdr->parent, ino);
}
/*
* In short-form directory entries the inode numbers are stored at variable
* offset behind the entry name. If the entry stores a filetype value, then it
* sits between the name and the inode number. Hence the inode numbers may only
* be accessed through the helpers below.
*/
static xfs_ino_t
xfs_dir2_sfe_get_ino(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
return xfs_dir2_sf_get_ino(hdr, &sfep->name[sfep->namelen]);
}
static void
xfs_dir2_sfe_put_ino(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep,
xfs_ino_t ino)
{
xfs_dir2_sf_put_ino(hdr, &sfep->name[sfep->namelen], ino);
}
static xfs_ino_t
xfs_dir3_sfe_get_ino(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
return xfs_dir2_sf_get_ino(hdr, &sfep->name[sfep->namelen + 1]);
}
static void
xfs_dir3_sfe_put_ino(
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep,
xfs_ino_t ino)
{
xfs_dir2_sf_put_ino(hdr, &sfep->name[sfep->namelen + 1], ino);
}
/*
* Directory data block operations
*/
/*
* For special situations, the dirent size ends up fixed because we always know
* what the size of the entry is. That's true for the "." and "..", and
* therefore we know that they are a fixed size and hence their offsets are
* constant, as is the first entry.
*
* Hence, this calculation is written as a macro to be able to be calculated at
* compile time and so certain offsets can be calculated directly in the
* structure initaliser via the macro. There are two macros - one for dirents
* with ftype and without so there are no unresolvable conditionals in the
* calculations. We also use round_up() as XFS_DIR2_DATA_ALIGN is always a power
* of 2 and the compiler doesn't reject it (unlike roundup()).
*/
#define XFS_DIR2_DATA_ENTSIZE(n) \
round_up((offsetof(struct xfs_dir2_data_entry, name[0]) + (n) + \
sizeof(xfs_dir2_data_off_t)), XFS_DIR2_DATA_ALIGN)
#define XFS_DIR3_DATA_ENTSIZE(n) \
round_up((offsetof(struct xfs_dir2_data_entry, name[0]) + (n) + \
sizeof(xfs_dir2_data_off_t) + sizeof(uint8_t)), \
XFS_DIR2_DATA_ALIGN)
static int
xfs_dir2_data_entsize(
int n)
{
return XFS_DIR2_DATA_ENTSIZE(n);
}
static int
xfs_dir3_data_entsize(
int n)
{
return XFS_DIR3_DATA_ENTSIZE(n);
}
static uint8_t
xfs_dir2_data_get_ftype(
struct xfs_dir2_data_entry *dep)
{
return XFS_DIR3_FT_UNKNOWN;
}
static void
xfs_dir2_data_put_ftype(
struct xfs_dir2_data_entry *dep,
uint8_t ftype)
{
ASSERT(ftype < XFS_DIR3_FT_MAX);
}
static uint8_t
xfs_dir3_data_get_ftype(
struct xfs_dir2_data_entry *dep)
{
uint8_t ftype = dep->name[dep->namelen];
if (ftype >= XFS_DIR3_FT_MAX)
return XFS_DIR3_FT_UNKNOWN;
return ftype;
}
static void
xfs_dir3_data_put_ftype(
struct xfs_dir2_data_entry *dep,
uint8_t type)
{
ASSERT(type < XFS_DIR3_FT_MAX);
ASSERT(dep->namelen != 0);
dep->name[dep->namelen] = type;
}
/*
* Pointer to an entry's tag word.
*/
static __be16 *
xfs_dir2_data_entry_tag_p(
struct xfs_dir2_data_entry *dep)
{
return (__be16 *)((char *)dep +
xfs_dir2_data_entsize(dep->namelen) - sizeof(__be16));
}
static __be16 *
xfs_dir3_data_entry_tag_p(
struct xfs_dir2_data_entry *dep)
{
return (__be16 *)((char *)dep +
xfs_dir3_data_entsize(dep->namelen) - sizeof(__be16));
}
/*
* location of . and .. in data space (always block 0)
*/
static struct xfs_dir2_data_entry *
xfs_dir2_data_dot_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr));
}
static struct xfs_dir2_data_entry *
xfs_dir2_data_dotdot_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR2_DATA_ENTSIZE(1));
}
static struct xfs_dir2_data_entry *
xfs_dir2_data_first_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR2_DATA_ENTSIZE(1) +
XFS_DIR2_DATA_ENTSIZE(2));
}
static struct xfs_dir2_data_entry *
xfs_dir2_ftype_data_dotdot_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1));
}
static struct xfs_dir2_data_entry *
xfs_dir2_ftype_data_first_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1) +
XFS_DIR3_DATA_ENTSIZE(2));
}
static struct xfs_dir2_data_entry *
xfs_dir3_data_dot_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir3_data_hdr));
}
static struct xfs_dir2_data_entry *
xfs_dir3_data_dotdot_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir3_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1));
}
static struct xfs_dir2_data_entry *
xfs_dir3_data_first_entry_p(
struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir3_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1) +
XFS_DIR3_DATA_ENTSIZE(2));
}
static struct xfs_dir2_data_free *
xfs_dir2_data_bestfree_p(struct xfs_dir2_data_hdr *hdr)
{
return hdr->bestfree;
}
static struct xfs_dir2_data_free *
xfs_dir3_data_bestfree_p(struct xfs_dir2_data_hdr *hdr)
{
return ((struct xfs_dir3_data_hdr *)hdr)->best_free;
}
static struct xfs_dir2_data_entry *
xfs_dir2_data_entry_p(struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr));
}
static struct xfs_dir2_data_unused *
xfs_dir2_data_unused_p(struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_unused *)
((char *)hdr + sizeof(struct xfs_dir2_data_hdr));
}
static struct xfs_dir2_data_entry *
xfs_dir3_data_entry_p(struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
((char *)hdr + sizeof(struct xfs_dir3_data_hdr));
}
static struct xfs_dir2_data_unused *
xfs_dir3_data_unused_p(struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_unused *)
((char *)hdr + sizeof(struct xfs_dir3_data_hdr));
}
/*
* Directory Leaf block operations
*/
static int
xfs_dir2_max_leaf_ents(struct xfs_da_geometry *geo)
{
return (geo->blksize - sizeof(struct xfs_dir2_leaf_hdr)) /
(uint)sizeof(struct xfs_dir2_leaf_entry);
}
static struct xfs_dir2_leaf_entry *
xfs_dir2_leaf_ents_p(struct xfs_dir2_leaf *lp)
{
return lp->__ents;
}
static int
xfs_dir3_max_leaf_ents(struct xfs_da_geometry *geo)
{
return (geo->blksize - sizeof(struct xfs_dir3_leaf_hdr)) /
(uint)sizeof(struct xfs_dir2_leaf_entry);
}
static struct xfs_dir2_leaf_entry *
xfs_dir3_leaf_ents_p(struct xfs_dir2_leaf *lp)
{
return ((struct xfs_dir3_leaf *)lp)->__ents;
}
static void
xfs_dir2_leaf_hdr_from_disk(
struct xfs_dir3_icleaf_hdr *to,
struct xfs_dir2_leaf *from)
{
to->forw = be32_to_cpu(from->hdr.info.forw);
to->back = be32_to_cpu(from->hdr.info.back);
to->magic = be16_to_cpu(from->hdr.info.magic);
to->count = be16_to_cpu(from->hdr.count);
to->stale = be16_to_cpu(from->hdr.stale);
ASSERT(to->magic == XFS_DIR2_LEAF1_MAGIC ||
to->magic == XFS_DIR2_LEAFN_MAGIC);
}
static void
xfs_dir2_leaf_hdr_to_disk(
struct xfs_dir2_leaf *to,
struct xfs_dir3_icleaf_hdr *from)
{
ASSERT(from->magic == XFS_DIR2_LEAF1_MAGIC ||
from->magic == XFS_DIR2_LEAFN_MAGIC);
to->hdr.info.forw = cpu_to_be32(from->forw);
to->hdr.info.back = cpu_to_be32(from->back);
to->hdr.info.magic = cpu_to_be16(from->magic);
to->hdr.count = cpu_to_be16(from->count);
to->hdr.stale = cpu_to_be16(from->stale);
}
static void
xfs_dir3_leaf_hdr_from_disk(
struct xfs_dir3_icleaf_hdr *to,
struct xfs_dir2_leaf *from)
{
struct xfs_dir3_leaf_hdr *hdr3 = (struct xfs_dir3_leaf_hdr *)from;
to->forw = be32_to_cpu(hdr3->info.hdr.forw);
to->back = be32_to_cpu(hdr3->info.hdr.back);
to->magic = be16_to_cpu(hdr3->info.hdr.magic);
to->count = be16_to_cpu(hdr3->count);
to->stale = be16_to_cpu(hdr3->stale);
ASSERT(to->magic == XFS_DIR3_LEAF1_MAGIC ||
to->magic == XFS_DIR3_LEAFN_MAGIC);
}
static void
xfs_dir3_leaf_hdr_to_disk(
struct xfs_dir2_leaf *to,
struct xfs_dir3_icleaf_hdr *from)
{
struct xfs_dir3_leaf_hdr *hdr3 = (struct xfs_dir3_leaf_hdr *)to;
ASSERT(from->magic == XFS_DIR3_LEAF1_MAGIC ||
from->magic == XFS_DIR3_LEAFN_MAGIC);
hdr3->info.hdr.forw = cpu_to_be32(from->forw);
hdr3->info.hdr.back = cpu_to_be32(from->back);
hdr3->info.hdr.magic = cpu_to_be16(from->magic);
hdr3->count = cpu_to_be16(from->count);
hdr3->stale = cpu_to_be16(from->stale);
}
/*
* Directory/Attribute Node block operations
*/
static struct xfs_da_node_entry *
xfs_da2_node_tree_p(struct xfs_da_intnode *dap)
{
return dap->__btree;
}
static struct xfs_da_node_entry *
xfs_da3_node_tree_p(struct xfs_da_intnode *dap)
{
return ((struct xfs_da3_intnode *)dap)->__btree;
}
static void
xfs_da2_node_hdr_from_disk(
struct xfs_da3_icnode_hdr *to,
struct xfs_da_intnode *from)
{
ASSERT(from->hdr.info.magic == cpu_to_be16(XFS_DA_NODE_MAGIC));
to->forw = be32_to_cpu(from->hdr.info.forw);
to->back = be32_to_cpu(from->hdr.info.back);
to->magic = be16_to_cpu(from->hdr.info.magic);
to->count = be16_to_cpu(from->hdr.__count);
to->level = be16_to_cpu(from->hdr.__level);
}
static void
xfs_da2_node_hdr_to_disk(
struct xfs_da_intnode *to,
struct xfs_da3_icnode_hdr *from)
{
ASSERT(from->magic == XFS_DA_NODE_MAGIC);
to->hdr.info.forw = cpu_to_be32(from->forw);
to->hdr.info.back = cpu_to_be32(from->back);
to->hdr.info.magic = cpu_to_be16(from->magic);
to->hdr.__count = cpu_to_be16(from->count);
to->hdr.__level = cpu_to_be16(from->level);
}
static void
xfs_da3_node_hdr_from_disk(
struct xfs_da3_icnode_hdr *to,
struct xfs_da_intnode *from)
{
struct xfs_da3_node_hdr *hdr3 = (struct xfs_da3_node_hdr *)from;
ASSERT(from->hdr.info.magic == cpu_to_be16(XFS_DA3_NODE_MAGIC));
to->forw = be32_to_cpu(hdr3->info.hdr.forw);
to->back = be32_to_cpu(hdr3->info.hdr.back);
to->magic = be16_to_cpu(hdr3->info.hdr.magic);
to->count = be16_to_cpu(hdr3->__count);
to->level = be16_to_cpu(hdr3->__level);
}
static void
xfs_da3_node_hdr_to_disk(
struct xfs_da_intnode *to,
struct xfs_da3_icnode_hdr *from)
{
struct xfs_da3_node_hdr *hdr3 = (struct xfs_da3_node_hdr *)to;
ASSERT(from->magic == XFS_DA3_NODE_MAGIC);
hdr3->info.hdr.forw = cpu_to_be32(from->forw);
hdr3->info.hdr.back = cpu_to_be32(from->back);
hdr3->info.hdr.magic = cpu_to_be16(from->magic);
hdr3->__count = cpu_to_be16(from->count);
hdr3->__level = cpu_to_be16(from->level);
}
/*
* Directory free space block operations
*/
static int
xfs_dir2_free_max_bests(struct xfs_da_geometry *geo)
{
return (geo->blksize - sizeof(struct xfs_dir2_free_hdr)) /
sizeof(xfs_dir2_data_off_t);
}
static __be16 *
xfs_dir2_free_bests_p(struct xfs_dir2_free *free)
{
return (__be16 *)((char *)free + sizeof(struct xfs_dir2_free_hdr));
}
/*
* Convert data space db to the corresponding free db.
*/
static xfs_dir2_db_t
xfs_dir2_db_to_fdb(struct xfs_da_geometry *geo, xfs_dir2_db_t db)
{
return xfs_dir2_byte_to_db(geo, XFS_DIR2_FREE_OFFSET) +
(db / xfs_dir2_free_max_bests(geo));
}
/*
* Convert data space db to the corresponding index in a free db.
*/
static int
xfs_dir2_db_to_fdindex(struct xfs_da_geometry *geo, xfs_dir2_db_t db)
{
return db % xfs_dir2_free_max_bests(geo);
}
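/*
 * Illustrative example, not from the original source: if a free block held,
 * say, 500 bestfree entries, data block 1023 would map to the free block at
 * xfs_dir2_byte_to_db(geo, XFS_DIR2_FREE_OFFSET) + 2 and to index 23 within
 * it, since 1023 / 500 == 2 and 1023 % 500 == 23.
 */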
static int
xfs_dir3_free_max_bests(struct xfs_da_geometry *geo)
{
return (geo->blksize - sizeof(struct xfs_dir3_free_hdr)) /
sizeof(xfs_dir2_data_off_t);
}
static __be16 *
xfs_dir3_free_bests_p(struct xfs_dir2_free *free)
{
return (__be16 *)((char *)free + sizeof(struct xfs_dir3_free_hdr));
}
/*
* Convert data space db to the corresponding free db.
*/
static xfs_dir2_db_t
xfs_dir3_db_to_fdb(struct xfs_da_geometry *geo, xfs_dir2_db_t db)
{
return xfs_dir2_byte_to_db(geo, XFS_DIR2_FREE_OFFSET) +
(db / xfs_dir3_free_max_bests(geo));
}
/*
* Convert data space db to the corresponding index in a free db.
*/
static int
xfs_dir3_db_to_fdindex(struct xfs_da_geometry *geo, xfs_dir2_db_t db)
{
return db % xfs_dir3_free_max_bests(geo);
}
static void
xfs_dir2_free_hdr_from_disk(
struct xfs_dir3_icfree_hdr *to,
struct xfs_dir2_free *from)
{
to->magic = be32_to_cpu(from->hdr.magic);
to->firstdb = be32_to_cpu(from->hdr.firstdb);
to->nvalid = be32_to_cpu(from->hdr.nvalid);
to->nused = be32_to_cpu(from->hdr.nused);
ASSERT(to->magic == XFS_DIR2_FREE_MAGIC);
}
static void
xfs_dir2_free_hdr_to_disk(
struct xfs_dir2_free *to,
struct xfs_dir3_icfree_hdr *from)
{
ASSERT(from->magic == XFS_DIR2_FREE_MAGIC);
to->hdr.magic = cpu_to_be32(from->magic);
to->hdr.firstdb = cpu_to_be32(from->firstdb);
to->hdr.nvalid = cpu_to_be32(from->nvalid);
to->hdr.nused = cpu_to_be32(from->nused);
}
static void
xfs_dir3_free_hdr_from_disk(
struct xfs_dir3_icfree_hdr *to,
struct xfs_dir2_free *from)
{
struct xfs_dir3_free_hdr *hdr3 = (struct xfs_dir3_free_hdr *)from;
to->magic = be32_to_cpu(hdr3->hdr.magic);
to->firstdb = be32_to_cpu(hdr3->firstdb);
to->nvalid = be32_to_cpu(hdr3->nvalid);
to->nused = be32_to_cpu(hdr3->nused);
ASSERT(to->magic == XFS_DIR3_FREE_MAGIC);
}
static void
xfs_dir3_free_hdr_to_disk(
struct xfs_dir2_free *to,
struct xfs_dir3_icfree_hdr *from)
{
struct xfs_dir3_free_hdr *hdr3 = (struct xfs_dir3_free_hdr *)to;
ASSERT(from->magic == XFS_DIR3_FREE_MAGIC);
hdr3->hdr.magic = cpu_to_be32(from->magic);
hdr3->firstdb = cpu_to_be32(from->firstdb);
hdr3->nvalid = cpu_to_be32(from->nvalid);
hdr3->nused = cpu_to_be32(from->nused);
}
static const struct xfs_dir_ops xfs_dir2_ops = {
.sf_entsize = xfs_dir2_sf_entsize,
.sf_nextentry = xfs_dir2_sf_nextentry,
.sf_get_ftype = xfs_dir2_sfe_get_ftype,
.sf_put_ftype = xfs_dir2_sfe_put_ftype,
.sf_get_ino = xfs_dir2_sfe_get_ino,
.sf_put_ino = xfs_dir2_sfe_put_ino,
.sf_get_parent_ino = xfs_dir2_sf_get_parent_ino,
.sf_put_parent_ino = xfs_dir2_sf_put_parent_ino,
.data_entsize = xfs_dir2_data_entsize,
.data_get_ftype = xfs_dir2_data_get_ftype,
.data_put_ftype = xfs_dir2_data_put_ftype,
.data_entry_tag_p = xfs_dir2_data_entry_tag_p,
.data_bestfree_p = xfs_dir2_data_bestfree_p,
.data_dot_offset = sizeof(struct xfs_dir2_data_hdr),
.data_dotdot_offset = sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR2_DATA_ENTSIZE(1),
.data_first_offset = sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR2_DATA_ENTSIZE(1) +
XFS_DIR2_DATA_ENTSIZE(2),
.data_entry_offset = sizeof(struct xfs_dir2_data_hdr),
.data_dot_entry_p = xfs_dir2_data_dot_entry_p,
.data_dotdot_entry_p = xfs_dir2_data_dotdot_entry_p,
.data_first_entry_p = xfs_dir2_data_first_entry_p,
.data_entry_p = xfs_dir2_data_entry_p,
.data_unused_p = xfs_dir2_data_unused_p,
.leaf_hdr_size = sizeof(struct xfs_dir2_leaf_hdr),
.leaf_hdr_to_disk = xfs_dir2_leaf_hdr_to_disk,
.leaf_hdr_from_disk = xfs_dir2_leaf_hdr_from_disk,
.leaf_max_ents = xfs_dir2_max_leaf_ents,
.leaf_ents_p = xfs_dir2_leaf_ents_p,
.node_hdr_size = sizeof(struct xfs_da_node_hdr),
.node_hdr_to_disk = xfs_da2_node_hdr_to_disk,
.node_hdr_from_disk = xfs_da2_node_hdr_from_disk,
.node_tree_p = xfs_da2_node_tree_p,
.free_hdr_size = sizeof(struct xfs_dir2_free_hdr),
.free_hdr_to_disk = xfs_dir2_free_hdr_to_disk,
.free_hdr_from_disk = xfs_dir2_free_hdr_from_disk,
.free_max_bests = xfs_dir2_free_max_bests,
.free_bests_p = xfs_dir2_free_bests_p,
.db_to_fdb = xfs_dir2_db_to_fdb,
.db_to_fdindex = xfs_dir2_db_to_fdindex,
};
static const struct xfs_dir_ops xfs_dir2_ftype_ops = {
.sf_entsize = xfs_dir3_sf_entsize,
.sf_nextentry = xfs_dir3_sf_nextentry,
.sf_get_ftype = xfs_dir3_sfe_get_ftype,
.sf_put_ftype = xfs_dir3_sfe_put_ftype,
.sf_get_ino = xfs_dir3_sfe_get_ino,
.sf_put_ino = xfs_dir3_sfe_put_ino,
.sf_get_parent_ino = xfs_dir2_sf_get_parent_ino,
.sf_put_parent_ino = xfs_dir2_sf_put_parent_ino,
.data_entsize = xfs_dir3_data_entsize,
.data_get_ftype = xfs_dir3_data_get_ftype,
.data_put_ftype = xfs_dir3_data_put_ftype,
.data_entry_tag_p = xfs_dir3_data_entry_tag_p,
.data_bestfree_p = xfs_dir2_data_bestfree_p,
.data_dot_offset = sizeof(struct xfs_dir2_data_hdr),
.data_dotdot_offset = sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1),
.data_first_offset = sizeof(struct xfs_dir2_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1) +
XFS_DIR3_DATA_ENTSIZE(2),
.data_entry_offset = sizeof(struct xfs_dir2_data_hdr),
.data_dot_entry_p = xfs_dir2_data_dot_entry_p,
.data_dotdot_entry_p = xfs_dir2_ftype_data_dotdot_entry_p,
.data_first_entry_p = xfs_dir2_ftype_data_first_entry_p,
.data_entry_p = xfs_dir2_data_entry_p,
.data_unused_p = xfs_dir2_data_unused_p,
.leaf_hdr_size = sizeof(struct xfs_dir2_leaf_hdr),
.leaf_hdr_to_disk = xfs_dir2_leaf_hdr_to_disk,
.leaf_hdr_from_disk = xfs_dir2_leaf_hdr_from_disk,
.leaf_max_ents = xfs_dir2_max_leaf_ents,
.leaf_ents_p = xfs_dir2_leaf_ents_p,
.node_hdr_size = sizeof(struct xfs_da_node_hdr),
.node_hdr_to_disk = xfs_da2_node_hdr_to_disk,
.node_hdr_from_disk = xfs_da2_node_hdr_from_disk,
.node_tree_p = xfs_da2_node_tree_p,
.free_hdr_size = sizeof(struct xfs_dir2_free_hdr),
.free_hdr_to_disk = xfs_dir2_free_hdr_to_disk,
.free_hdr_from_disk = xfs_dir2_free_hdr_from_disk,
.free_max_bests = xfs_dir2_free_max_bests,
.free_bests_p = xfs_dir2_free_bests_p,
.db_to_fdb = xfs_dir2_db_to_fdb,
.db_to_fdindex = xfs_dir2_db_to_fdindex,
};
static const struct xfs_dir_ops xfs_dir3_ops = {
.sf_entsize = xfs_dir3_sf_entsize,
.sf_nextentry = xfs_dir3_sf_nextentry,
.sf_get_ftype = xfs_dir3_sfe_get_ftype,
.sf_put_ftype = xfs_dir3_sfe_put_ftype,
.sf_get_ino = xfs_dir3_sfe_get_ino,
.sf_put_ino = xfs_dir3_sfe_put_ino,
.sf_get_parent_ino = xfs_dir2_sf_get_parent_ino,
.sf_put_parent_ino = xfs_dir2_sf_put_parent_ino,
.data_entsize = xfs_dir3_data_entsize,
.data_get_ftype = xfs_dir3_data_get_ftype,
.data_put_ftype = xfs_dir3_data_put_ftype,
.data_entry_tag_p = xfs_dir3_data_entry_tag_p,
.data_bestfree_p = xfs_dir3_data_bestfree_p,
.data_dot_offset = sizeof(struct xfs_dir3_data_hdr),
.data_dotdot_offset = sizeof(struct xfs_dir3_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1),
.data_first_offset = sizeof(struct xfs_dir3_data_hdr) +
XFS_DIR3_DATA_ENTSIZE(1) +
XFS_DIR3_DATA_ENTSIZE(2),
.data_entry_offset = sizeof(struct xfs_dir3_data_hdr),
.data_dot_entry_p = xfs_dir3_data_dot_entry_p,
.data_dotdot_entry_p = xfs_dir3_data_dotdot_entry_p,
.data_first_entry_p = xfs_dir3_data_first_entry_p,
.data_entry_p = xfs_dir3_data_entry_p,
.data_unused_p = xfs_dir3_data_unused_p,
.leaf_hdr_size = sizeof(struct xfs_dir3_leaf_hdr),
.leaf_hdr_to_disk = xfs_dir3_leaf_hdr_to_disk,
.leaf_hdr_from_disk = xfs_dir3_leaf_hdr_from_disk,
.leaf_max_ents = xfs_dir3_max_leaf_ents,
.leaf_ents_p = xfs_dir3_leaf_ents_p,
.node_hdr_size = sizeof(struct xfs_da3_node_hdr),
.node_hdr_to_disk = xfs_da3_node_hdr_to_disk,
.node_hdr_from_disk = xfs_da3_node_hdr_from_disk,
.node_tree_p = xfs_da3_node_tree_p,
.free_hdr_size = sizeof(struct xfs_dir3_free_hdr),
.free_hdr_to_disk = xfs_dir3_free_hdr_to_disk,
.free_hdr_from_disk = xfs_dir3_free_hdr_from_disk,
.free_max_bests = xfs_dir3_free_max_bests,
.free_bests_p = xfs_dir3_free_bests_p,
.db_to_fdb = xfs_dir3_db_to_fdb,
.db_to_fdindex = xfs_dir3_db_to_fdindex,
};
static const struct xfs_dir_ops xfs_dir2_nondir_ops = {
.node_hdr_size = sizeof(struct xfs_da_node_hdr),
.node_hdr_to_disk = xfs_da2_node_hdr_to_disk,
.node_hdr_from_disk = xfs_da2_node_hdr_from_disk,
.node_tree_p = xfs_da2_node_tree_p,
};
static const struct xfs_dir_ops xfs_dir3_nondir_ops = {
.node_hdr_size = sizeof(struct xfs_da3_node_hdr),
.node_hdr_to_disk = xfs_da3_node_hdr_to_disk,
.node_hdr_from_disk = xfs_da3_node_hdr_from_disk,
.node_tree_p = xfs_da3_node_tree_p,
};
/*
* Return the ops structure according to the current config. If we are passed
* an inode, then that overrides the default config we use which is based on
* feature bits.
*/
const struct xfs_dir_ops *
xfs_dir_get_ops(
struct xfs_mount *mp,
struct xfs_inode *dp)
{
if (dp)
return dp->d_ops;
if (mp->m_dir_inode_ops)
return mp->m_dir_inode_ops;
if (xfs_sb_version_hascrc(&mp->m_sb))
return &xfs_dir3_ops;
if (xfs_sb_version_hasftype(&mp->m_sb))
return &xfs_dir2_ftype_ops;
return &xfs_dir2_ops;
}
const struct xfs_dir_ops *
xfs_nondir_get_ops(
struct xfs_mount *mp,
struct xfs_inode *dp)
{
if (dp)
return dp->d_ops;
if (mp->m_nondir_inode_ops)
return mp->m_nondir_inode_ops;
if (xfs_sb_version_hascrc(&mp->m_sb))
return &xfs_dir3_nondir_ops;
return &xfs_dir2_nondir_ops;
}
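
Both helpers fall back to feature-bit defaults when no inode is supplied and hand back the cached dp->d_ops otherwise. As a rough, hypothetical sketch of how a caller consumes the resulting vector (the function name below is invented purely for illustration and does not exist in the tree):

/* Hypothetical caller, shown only to illustrate the d_ops indirection. */
static int
example_dirent_size(
	struct xfs_mount	*mp,
	struct xfs_inode	*dp,
	int			namelen)
{
	const struct xfs_dir_ops *ops = xfs_dir_get_ops(mp, dp);

	/* Dispatch through whichever per-format vector was selected. */
	return ops->data_entsize(namelen);
}

The later hunks in this merge replace exactly this kind of per-call function-pointer dispatch with direct xfs_dir2_* helpers that check the superblock feature bits themselves.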


@@ -93,19 +93,6 @@ struct xfs_da3_intnode {
 	struct xfs_da_node_entry __btree[];
 };
 
-/*
- * In-core version of the node header to abstract the differences in the v2 and
- * v3 disk format of the headers. Callers need to convert to/from disk format as
- * appropriate.
- */
-struct xfs_da3_icnode_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	level;
-};
-
 /*
  * Directory version 2.
  *
@@ -434,14 +421,6 @@ struct xfs_dir3_leaf_hdr {
 	__be32			pad;		/* 64 bit alignment */
 };
 
-struct xfs_dir3_icleaf_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	stale;
-};
-
 /*
  * Leaf block entry.
  */
@@ -482,7 +461,7 @@ xfs_dir2_leaf_bests_p(struct xfs_dir2_leaf_tail *ltp)
 }
 
 /*
- * Free space block defintions for the node format.
+ * Free space block definitions for the node format.
  */
 
 /*
@@ -520,19 +499,6 @@ struct xfs_dir3_free {
 #define XFS_DIR3_FREE_CRC_OFF  offsetof(struct xfs_dir3_free, hdr.hdr.crc)
 
-/*
- * In core version of the free block header, abstracted away from on-disk format
- * differences. Use this in the code, and convert to/from the disk version using
- * xfs_dir3_free_hdr_from_disk/xfs_dir3_free_hdr_to_disk.
- */
-struct xfs_dir3_icfree_hdr {
-	uint32_t	magic;
-	uint32_t	firstdb;
-	uint32_t	nvalid;
-	uint32_t	nused;
-};
-
 /*
  * Single block format.
  *
@@ -709,29 +675,6 @@ struct xfs_attr3_leafblock {
 	 */
 };
 
-/*
- * incore, neutral version of the attribute leaf header
- */
-struct xfs_attr3_icleaf_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	usedbytes;
-	/*
-	 * firstused is 32-bit here instead of 16-bit like the on-disk variant
-	 * to support maximum fsb size of 64k without overflow issues throughout
-	 * the attr code. Instead, the overflow condition is handled on
-	 * conversion to/from disk.
-	 */
-	uint32_t	firstused;
-	__u8		holes;
-	struct {
-		uint16_t	base;
-		uint16_t	size;
-	} freemap[XFS_ATTR_LEAF_MAPSIZE];
-};
-
 /*
  * Special value to represent fs block size in the leaf header firstused field.
  * Only used when block size overflows the 2-bytes available on disk.


@@ -52,7 +52,7 @@ xfs_mode_to_ftype(
  * ASCII case-insensitive (ie. A-Z) support for directories that was
  * used in IRIX.
  */
-STATIC xfs_dahash_t
+xfs_dahash_t
 xfs_ascii_ci_hashname(
 	struct xfs_name	*name)
 {
@@ -65,14 +65,14 @@ xfs_ascii_ci_hashname(
 	return hash;
 }
 
-STATIC enum xfs_dacmp
+enum xfs_dacmp
 xfs_ascii_ci_compname(
 	struct xfs_da_args	*args,
 	const unsigned char	*name,
 	int			len)
 {
 	enum xfs_dacmp		result;
 	int			i;
 
 	if (args->namelen != len)
 		return XFS_CMP_DIFFERENT;
@@ -89,26 +89,16 @@ xfs_ascii_ci_compname(
 	return result;
 }
 
-static const struct xfs_nameops xfs_ascii_ci_nameops = {
-	.hashname	= xfs_ascii_ci_hashname,
-	.compname	= xfs_ascii_ci_compname,
-};
-
 int
 xfs_da_mount(
 	struct xfs_mount	*mp)
 {
 	struct xfs_da_geometry	*dageo;
-	int			nodehdr_size;
 
 	ASSERT(mp->m_sb.sb_versionnum & XFS_SB_VERSION_DIRV2BIT);
 	ASSERT(xfs_dir2_dirblock_bytes(&mp->m_sb) <= XFS_MAX_BLOCKSIZE);
 
-	mp->m_dir_inode_ops = xfs_dir_get_ops(mp, NULL);
-	mp->m_nondir_inode_ops = xfs_nondir_get_ops(mp, NULL);
-
-	nodehdr_size = mp->m_dir_inode_ops->node_hdr_size;
 	mp->m_dir_geo = kmem_zalloc(sizeof(struct xfs_da_geometry),
 				    KM_MAYFAIL);
 	mp->m_attr_geo = kmem_zalloc(sizeof(struct xfs_da_geometry),
@@ -125,6 +115,27 @@ xfs_da_mount(
 	dageo->fsblog = mp->m_sb.sb_blocklog;
 	dageo->blksize = xfs_dir2_dirblock_bytes(&mp->m_sb);
 	dageo->fsbcount = 1 << mp->m_sb.sb_dirblklog;
+	if (xfs_sb_version_hascrc(&mp->m_sb)) {
+		dageo->node_hdr_size = sizeof(struct xfs_da3_node_hdr);
+		dageo->leaf_hdr_size = sizeof(struct xfs_dir3_leaf_hdr);
+		dageo->free_hdr_size = sizeof(struct xfs_dir3_free_hdr);
+		dageo->data_entry_offset =
+				sizeof(struct xfs_dir3_data_hdr);
+	} else {
+		dageo->node_hdr_size = sizeof(struct xfs_da_node_hdr);
+		dageo->leaf_hdr_size = sizeof(struct xfs_dir2_leaf_hdr);
+		dageo->free_hdr_size = sizeof(struct xfs_dir2_free_hdr);
+		dageo->data_entry_offset =
+				sizeof(struct xfs_dir2_data_hdr);
+	}
+	dageo->leaf_max_ents = (dageo->blksize - dageo->leaf_hdr_size) /
+			sizeof(struct xfs_dir2_leaf_entry);
+	dageo->free_max_bests = (dageo->blksize - dageo->free_hdr_size) /
+			sizeof(xfs_dir2_data_off_t);
+
+	dageo->data_first_offset = dageo->data_entry_offset +
+			xfs_dir2_data_entsize(mp, 1) +
+			xfs_dir2_data_entsize(mp, 2);
 
 	/*
 	 * Now we've set up the block conversion variables, we can calculate the
@@ -133,7 +144,7 @@ xfs_da_mount(
 	dageo->datablk = xfs_dir2_byte_to_da(dageo, XFS_DIR2_DATA_OFFSET);
 	dageo->leafblk = xfs_dir2_byte_to_da(dageo, XFS_DIR2_LEAF_OFFSET);
 	dageo->freeblk = xfs_dir2_byte_to_da(dageo, XFS_DIR2_FREE_OFFSET);
-	dageo->node_ents = (dageo->blksize - nodehdr_size) /
+	dageo->node_ents = (dageo->blksize - dageo->node_hdr_size) /
 			(uint)sizeof(xfs_da_node_entry_t);
 	dageo->magicpct = (dageo->blksize * 37) / 100;
 
@@ -143,15 +154,10 @@ xfs_da_mount(
 	dageo->fsblog = mp->m_sb.sb_blocklog;
 	dageo->blksize = 1 << dageo->blklog;
 	dageo->fsbcount = 1;
-	dageo->node_ents = (dageo->blksize - nodehdr_size) /
+	dageo->node_hdr_size = mp->m_dir_geo->node_hdr_size;
+	dageo->node_ents = (dageo->blksize - dageo->node_hdr_size) /
 			(uint)sizeof(xfs_da_node_entry_t);
 	dageo->magicpct = (dageo->blksize * 37) / 100;
 
-	if (xfs_sb_version_hasasciici(&mp->m_sb))
-		mp->m_dirnameops = &xfs_ascii_ci_nameops;
-	else
-		mp->m_dirnameops = &xfs_default_nameops;
-
 	return 0;
 }
@@ -191,10 +197,10 @@ xfs_dir_ino_validate(
 {
 	bool		ino_ok = xfs_verify_dir_ino(mp, ino);
 
-	if (unlikely(XFS_TEST_ERROR(!ino_ok, mp, XFS_ERRTAG_DIR_INO_VALIDATE))) {
+	if (XFS_IS_CORRUPT(mp, !ino_ok) ||
+	    XFS_TEST_ERROR(false, mp, XFS_ERRTAG_DIR_INO_VALIDATE)) {
 		xfs_warn(mp, "Invalid inode number 0x%Lx",
 				(unsigned long long) ino);
-		XFS_ERROR_REPORT("xfs_dir_ino_validate", XFS_ERRLEVEL_LOW, mp);
 		return -EFSCORRUPTED;
 	}
 	return 0;
@@ -262,7 +268,7 @@ xfs_dir_createname(
 	args->name = name->name;
 	args->namelen = name->len;
 	args->filetype = name->type;
-	args->hashval = dp->i_mount->m_dirnameops->hashname(name);
+	args->hashval = xfs_dir2_hashname(dp->i_mount, name);
 	args->inumber = inum;
 	args->dp = dp;
 	args->total = total;
@@ -358,7 +364,7 @@ xfs_dir_lookup(
 	args->name = name->name;
 	args->namelen = name->len;
 	args->filetype = name->type;
-	args->hashval = dp->i_mount->m_dirnameops->hashname(name);
+	args->hashval = xfs_dir2_hashname(dp->i_mount, name);
 	args->dp = dp;
 	args->whichfork = XFS_DATA_FORK;
 	args->trans = tp;
@@ -430,7 +436,7 @@ xfs_dir_removename(
 	args->name = name->name;
 	args->namelen = name->len;
 	args->filetype = name->type;
-	args->hashval = dp->i_mount->m_dirnameops->hashname(name);
+	args->hashval = xfs_dir2_hashname(dp->i_mount, name);
 	args->inumber = ino;
 	args->dp = dp;
 	args->total = total;
@@ -491,7 +497,7 @@ xfs_dir_replace(
 	args->name = name->name;
 	args->namelen = name->len;
 	args->filetype = name->type;
-	args->hashval = dp->i_mount->m_dirnameops->hashname(name);
+	args->hashval = xfs_dir2_hashname(dp->i_mount, name);
 	args->inumber = inum;
 	args->dp = dp;
 	args->total = total;
@@ -600,7 +606,9 @@ xfs_dir2_isblock(
 	if ((rval = xfs_bmap_last_offset(args->dp, &last, XFS_DATA_FORK)))
 		return rval;
 	rval = XFS_FSB_TO_B(args->dp->i_mount, last) == args->geo->blksize;
-	if (rval != 0 && args->dp->i_d.di_size != args->geo->blksize)
+	if (XFS_IS_CORRUPT(args->dp->i_mount,
+			   rval != 0 &&
+			   args->dp->i_d.di_size != args->geo->blksize))
 		return -EFSCORRUPTED;
 	*vp = rval;
 	return 0;


@@ -18,6 +18,8 @@ struct xfs_dir2_sf_entry;
 struct xfs_dir2_data_hdr;
 struct xfs_dir2_data_entry;
 struct xfs_dir2_data_unused;
+struct xfs_dir3_icfree_hdr;
+struct xfs_dir3_icleaf_hdr;
 
 extern struct xfs_name	xfs_name_dotdot;
 
@@ -26,85 +28,6 @@ extern struct xfs_name xfs_name_dotdot;
  */
 extern unsigned char xfs_mode_to_ftype(int mode);
 
-/*
- * directory operations vector for encode/decode routines
- */
-struct xfs_dir_ops {
-	int	(*sf_entsize)(struct xfs_dir2_sf_hdr *hdr, int len);
-	struct xfs_dir2_sf_entry *
-		(*sf_nextentry)(struct xfs_dir2_sf_hdr *hdr,
-				struct xfs_dir2_sf_entry *sfep);
-	uint8_t	(*sf_get_ftype)(struct xfs_dir2_sf_entry *sfep);
-	void	(*sf_put_ftype)(struct xfs_dir2_sf_entry *sfep,
-				uint8_t ftype);
-	xfs_ino_t (*sf_get_ino)(struct xfs_dir2_sf_hdr *hdr,
-				struct xfs_dir2_sf_entry *sfep);
-	void	(*sf_put_ino)(struct xfs_dir2_sf_hdr *hdr,
-				struct xfs_dir2_sf_entry *sfep,
-				xfs_ino_t ino);
-	xfs_ino_t (*sf_get_parent_ino)(struct xfs_dir2_sf_hdr *hdr);
-	void	(*sf_put_parent_ino)(struct xfs_dir2_sf_hdr *hdr,
-				xfs_ino_t ino);
-
-	int	(*data_entsize)(int len);
-	uint8_t	(*data_get_ftype)(struct xfs_dir2_data_entry *dep);
-	void	(*data_put_ftype)(struct xfs_dir2_data_entry *dep,
-				uint8_t ftype);
-	__be16 * (*data_entry_tag_p)(struct xfs_dir2_data_entry *dep);
-	struct xfs_dir2_data_free *
-		(*data_bestfree_p)(struct xfs_dir2_data_hdr *hdr);
-
-	xfs_dir2_data_aoff_t data_dot_offset;
-	xfs_dir2_data_aoff_t data_dotdot_offset;
-	xfs_dir2_data_aoff_t data_first_offset;
-	size_t	data_entry_offset;
-
-	struct xfs_dir2_data_entry *
-		(*data_dot_entry_p)(struct xfs_dir2_data_hdr *hdr);
-	struct xfs_dir2_data_entry *
-		(*data_dotdot_entry_p)(struct xfs_dir2_data_hdr *hdr);
-	struct xfs_dir2_data_entry *
-		(*data_first_entry_p)(struct xfs_dir2_data_hdr *hdr);
-	struct xfs_dir2_data_entry *
-		(*data_entry_p)(struct xfs_dir2_data_hdr *hdr);
-	struct xfs_dir2_data_unused *
-		(*data_unused_p)(struct xfs_dir2_data_hdr *hdr);
-
-	int	leaf_hdr_size;
-	void	(*leaf_hdr_to_disk)(struct xfs_dir2_leaf *to,
-				struct xfs_dir3_icleaf_hdr *from);
-	void	(*leaf_hdr_from_disk)(struct xfs_dir3_icleaf_hdr *to,
-				struct xfs_dir2_leaf *from);
-	int	(*leaf_max_ents)(struct xfs_da_geometry *geo);
-	struct xfs_dir2_leaf_entry *
-		(*leaf_ents_p)(struct xfs_dir2_leaf *lp);
-
-	int	node_hdr_size;
-	void	(*node_hdr_to_disk)(struct xfs_da_intnode *to,
-				struct xfs_da3_icnode_hdr *from);
-	void	(*node_hdr_from_disk)(struct xfs_da3_icnode_hdr *to,
-				struct xfs_da_intnode *from);
-	struct xfs_da_node_entry *
-		(*node_tree_p)(struct xfs_da_intnode *dap);
-
-	int	free_hdr_size;
-	void	(*free_hdr_to_disk)(struct xfs_dir2_free *to,
-				struct xfs_dir3_icfree_hdr *from);
-	void	(*free_hdr_from_disk)(struct xfs_dir3_icfree_hdr *to,
-				struct xfs_dir2_free *from);
-	int	(*free_max_bests)(struct xfs_da_geometry *geo);
-	__be16 * (*free_bests_p)(struct xfs_dir2_free *free);
-	xfs_dir2_db_t (*db_to_fdb)(struct xfs_da_geometry *geo,
-				xfs_dir2_db_t db);
-	int	(*db_to_fdindex)(struct xfs_da_geometry *geo,
-				xfs_dir2_db_t db);
-};
-
-extern const struct xfs_dir_ops *
-	xfs_dir_get_ops(struct xfs_mount *mp, struct xfs_inode *dp);
-extern const struct xfs_dir_ops *
-	xfs_nondir_get_ops(struct xfs_mount *mp, struct xfs_inode *dp);
-
 /*
  * Generic directory interface routines
  */
@@ -124,6 +47,8 @@ extern int xfs_dir_lookup(struct xfs_trans *tp, struct xfs_inode *dp,
 extern int xfs_dir_removename(struct xfs_trans *tp, struct xfs_inode *dp,
 				struct xfs_name *name, xfs_ino_t ino,
 				xfs_extlen_t tot);
+extern bool xfs_dir2_sf_replace_needblock(struct xfs_inode *dp,
+				xfs_ino_t inum);
 extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp,
 				struct xfs_name *name, xfs_ino_t inum,
 				xfs_extlen_t tot);
@@ -143,10 +68,7 @@ extern int xfs_dir2_isleaf(struct xfs_da_args *args, int *r);
 extern int xfs_dir2_shrink_inode(struct xfs_da_args *args, xfs_dir2_db_t db,
 				struct xfs_buf *bp);
 
-extern void xfs_dir2_data_freescan_int(struct xfs_da_geometry *geo,
-				const struct xfs_dir_ops *ops,
-				struct xfs_dir2_data_hdr *hdr, int *loghead);
-extern void xfs_dir2_data_freescan(struct xfs_inode *dp,
+extern void xfs_dir2_data_freescan(struct xfs_mount *mp,
 				struct xfs_dir2_data_hdr *hdr, int *loghead);
 extern void xfs_dir2_data_log_entry(struct xfs_da_args *args,
 				struct xfs_buf *bp, struct xfs_dir2_data_entry *dep);
@@ -324,7 +246,7 @@ xfs_dir2_leaf_tail_p(struct xfs_da_geometry *geo, struct xfs_dir2_leaf *lp)
 #define XFS_READDIR_BUFSIZE	(32768)
 
 unsigned char xfs_dir3_get_dtype(struct xfs_mount *mp, uint8_t filetype);
-void *xfs_dir3_data_endp(struct xfs_da_geometry *geo,
+unsigned int xfs_dir3_data_end_offset(struct xfs_da_geometry *geo,
 		struct xfs_dir2_data_hdr *hdr);
 bool xfs_dir2_namecheck(const void *name, size_t length);


@ -123,7 +123,7 @@ xfs_dir3_block_read(
struct xfs_mount *mp = dp->i_mount; struct xfs_mount *mp = dp->i_mount;
int err; int err;
err = xfs_da_read_buf(tp, dp, mp->m_dir_geo->datablk, -1, bpp, err = xfs_da_read_buf(tp, dp, mp->m_dir_geo->datablk, 0, bpp,
XFS_DATA_FORK, &xfs_dir3_block_buf_ops); XFS_DATA_FORK, &xfs_dir3_block_buf_ops);
if (!err && tp && *bpp) if (!err && tp && *bpp)
xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_BLOCK_BUF); xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_BLOCK_BUF);
@ -172,7 +172,7 @@ xfs_dir2_block_need_space(
struct xfs_dir2_data_unused *enddup = NULL; struct xfs_dir2_data_unused *enddup = NULL;
*compact = 0; *compact = 0;
bf = dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
/* /*
* If there are stale entries we'll use one for the leaf. * If there are stale entries we'll use one for the leaf.
@ -311,7 +311,7 @@ xfs_dir2_block_compact(
* This needs to happen before the next call to use_free. * This needs to happen before the next call to use_free.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(args->dp, hdr, needlog); xfs_dir2_data_freescan(args->dp->i_mount, hdr, needlog);
} }
/* /*
@ -355,7 +355,7 @@ xfs_dir2_block_addname(
if (error) if (error)
return error; return error;
len = dp->d_ops->data_entsize(args->namelen); len = xfs_dir2_data_entsize(dp->i_mount, args->namelen);
/* /*
* Set up pointers to parts of the block. * Set up pointers to parts of the block.
@ -458,7 +458,7 @@ xfs_dir2_block_addname(
* This needs to happen before the next call to use_free. * This needs to happen before the next call to use_free.
*/ */
if (needscan) { if (needscan) {
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
needscan = 0; needscan = 0;
} }
/* /*
@ -541,14 +541,14 @@ xfs_dir2_block_addname(
dep->inumber = cpu_to_be64(args->inumber); dep->inumber = cpu_to_be64(args->inumber);
dep->namelen = args->namelen; dep->namelen = args->namelen;
memcpy(dep->name, args->name, args->namelen); memcpy(dep->name, args->name, args->namelen);
dp->d_ops->data_put_ftype(dep, args->filetype); xfs_dir2_data_put_ftype(dp->i_mount, dep, args->filetype);
tagp = dp->d_ops->data_entry_tag_p(dep); tagp = xfs_dir2_data_entry_tag_p(dp->i_mount, dep);
*tagp = cpu_to_be16((char *)dep - (char *)hdr); *tagp = cpu_to_be16((char *)dep - (char *)hdr);
/* /*
* Clean up the bestfree array and log the header, tail, and entry. * Clean up the bestfree array and log the header, tail, and entry.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
if (needlog) if (needlog)
xfs_dir2_data_log_header(args, bp); xfs_dir2_data_log_header(args, bp);
xfs_dir2_block_log_tail(tp, bp); xfs_dir2_block_log_tail(tp, bp);
@ -633,7 +633,7 @@ xfs_dir2_block_lookup(
* Fill in inode number, CI name if appropriate, release the block. * Fill in inode number, CI name if appropriate, release the block.
*/ */
args->inumber = be64_to_cpu(dep->inumber); args->inumber = be64_to_cpu(dep->inumber);
args->filetype = dp->d_ops->data_get_ftype(dep); args->filetype = xfs_dir2_data_get_ftype(dp->i_mount, dep);
error = xfs_dir_cilookup_result(args, dep->name, dep->namelen); error = xfs_dir_cilookup_result(args, dep->name, dep->namelen);
xfs_trans_brelse(args->trans, bp); xfs_trans_brelse(args->trans, bp);
return error; return error;
@ -660,13 +660,11 @@ xfs_dir2_block_lookup_int(
int high; /* binary search high index */ int high; /* binary search high index */
int low; /* binary search low index */ int low; /* binary search low index */
int mid; /* binary search current idx */ int mid; /* binary search current idx */
xfs_mount_t *mp; /* filesystem mount point */
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
enum xfs_dacmp cmp; /* comparison result */ enum xfs_dacmp cmp; /* comparison result */
dp = args->dp; dp = args->dp;
tp = args->trans; tp = args->trans;
mp = dp->i_mount;
error = xfs_dir3_block_read(tp, dp, &bp); error = xfs_dir3_block_read(tp, dp, &bp);
if (error) if (error)
@ -718,7 +716,7 @@ xfs_dir2_block_lookup_int(
* and buffer. If it's the first case-insensitive match, store * and buffer. If it's the first case-insensitive match, store
* the index and buffer and continue looking for an exact match. * the index and buffer and continue looking for an exact match.
*/ */
cmp = mp->m_dirnameops->compname(args, dep->name, dep->namelen); cmp = xfs_dir2_compname(args, dep->name, dep->namelen);
if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) { if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) {
args->cmpresult = cmp; args->cmpresult = cmp;
*bpp = bp; *bpp = bp;
@ -791,7 +789,8 @@ xfs_dir2_block_removename(
needlog = needscan = 0; needlog = needscan = 0;
xfs_dir2_data_make_free(args, bp, xfs_dir2_data_make_free(args, bp,
(xfs_dir2_data_aoff_t)((char *)dep - (char *)hdr), (xfs_dir2_data_aoff_t)((char *)dep - (char *)hdr),
dp->d_ops->data_entsize(dep->namelen), &needlog, &needscan); xfs_dir2_data_entsize(dp->i_mount, dep->namelen), &needlog,
&needscan);
/* /*
* Fix up the block tail. * Fix up the block tail.
*/ */
@ -806,7 +805,7 @@ xfs_dir2_block_removename(
* Fix up bestfree, log the header if necessary. * Fix up bestfree, log the header if necessary.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
if (needlog) if (needlog)
xfs_dir2_data_log_header(args, bp); xfs_dir2_data_log_header(args, bp);
xfs_dir3_data_check(dp, bp); xfs_dir3_data_check(dp, bp);
@ -864,7 +863,7 @@ xfs_dir2_block_replace(
* Change the inode number to the new value. * Change the inode number to the new value.
*/ */
dep->inumber = cpu_to_be64(args->inumber); dep->inumber = cpu_to_be64(args->inumber);
dp->d_ops->data_put_ftype(dep, args->filetype); xfs_dir2_data_put_ftype(dp->i_mount, dep, args->filetype);
xfs_dir2_data_log_entry(args, bp, dep); xfs_dir2_data_log_entry(args, bp, dep);
xfs_dir3_data_check(dp, bp); xfs_dir3_data_check(dp, bp);
return 0; return 0;
@ -914,7 +913,6 @@ xfs_dir2_leaf_to_block(
__be16 *tagp; /* end of entry (tag) */ __be16 *tagp; /* end of entry (tag) */
int to; /* block/leaf to index */ int to; /* block/leaf to index */
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
trace_xfs_dir2_leaf_to_block(args); trace_xfs_dir2_leaf_to_block(args);
@ -923,8 +921,7 @@ xfs_dir2_leaf_to_block(
tp = args->trans; tp = args->trans;
mp = dp->i_mount; mp = dp->i_mount;
leaf = lbp->b_addr; leaf = lbp->b_addr;
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf); xfs_dir2_leaf_hdr_from_disk(mp, &leafhdr, leaf);
ents = dp->d_ops->leaf_ents_p(leaf);
ltp = xfs_dir2_leaf_tail_p(args->geo, leaf); ltp = xfs_dir2_leaf_tail_p(args->geo, leaf);
ASSERT(leafhdr.magic == XFS_DIR2_LEAF1_MAGIC || ASSERT(leafhdr.magic == XFS_DIR2_LEAF1_MAGIC ||
@ -938,7 +935,7 @@ xfs_dir2_leaf_to_block(
while (dp->i_d.di_size > args->geo->blksize) { while (dp->i_d.di_size > args->geo->blksize) {
int hdrsz; int hdrsz;
hdrsz = dp->d_ops->data_entry_offset; hdrsz = args->geo->data_entry_offset;
bestsp = xfs_dir2_leaf_bests_p(ltp); bestsp = xfs_dir2_leaf_bests_p(ltp);
if (be16_to_cpu(bestsp[be32_to_cpu(ltp->bestcount) - 1]) == if (be16_to_cpu(bestsp[be32_to_cpu(ltp->bestcount) - 1]) ==
args->geo->blksize - hdrsz) { args->geo->blksize - hdrsz) {
@ -953,7 +950,7 @@ xfs_dir2_leaf_to_block(
* Read the data block if we don't already have it, give up if it fails. * Read the data block if we don't already have it, give up if it fails.
*/ */
if (!dbp) { if (!dbp) {
error = xfs_dir3_data_read(tp, dp, args->geo->datablk, -1, &dbp); error = xfs_dir3_data_read(tp, dp, args->geo->datablk, 0, &dbp);
if (error) if (error)
return error; return error;
} }
@ -1004,9 +1001,10 @@ xfs_dir2_leaf_to_block(
*/ */
lep = xfs_dir2_block_leaf_p(btp); lep = xfs_dir2_block_leaf_p(btp);
for (from = to = 0; from < leafhdr.count; from++) { for (from = to = 0; from < leafhdr.count; from++) {
if (ents[from].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR)) if (leafhdr.ents[from].address ==
cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
continue; continue;
lep[to++] = ents[from]; lep[to++] = leafhdr.ents[from];
} }
ASSERT(to == be32_to_cpu(btp->count)); ASSERT(to == be32_to_cpu(btp->count));
xfs_dir2_block_log_leaf(tp, dbp, 0, be32_to_cpu(btp->count) - 1); xfs_dir2_block_log_leaf(tp, dbp, 0, be32_to_cpu(btp->count) - 1);
@ -1014,7 +1012,7 @@ xfs_dir2_leaf_to_block(
* Scan the bestfree if we need it and log the data block header. * Scan the bestfree if we need it and log the data block header.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
if (needlog) if (needlog)
xfs_dir2_data_log_header(args, dbp); xfs_dir2_data_log_header(args, dbp);
/* /*
@ -1039,47 +1037,38 @@ xfs_dir2_leaf_to_block(
*/ */
int /* error */ int /* error */
xfs_dir2_sf_to_block( xfs_dir2_sf_to_block(
xfs_da_args_t *args) /* operation arguments */ struct xfs_da_args *args)
{ {
struct xfs_trans *tp = args->trans;
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
struct xfs_ifork *ifp = XFS_IFORK_PTR(dp, XFS_DATA_FORK);
struct xfs_da_geometry *geo = args->geo;
xfs_dir2_db_t blkno; /* dir-relative block # (0) */ xfs_dir2_db_t blkno; /* dir-relative block # (0) */
xfs_dir2_data_hdr_t *hdr; /* block header */ xfs_dir2_data_hdr_t *hdr; /* block header */
xfs_dir2_leaf_entry_t *blp; /* block leaf entries */ xfs_dir2_leaf_entry_t *blp; /* block leaf entries */
struct xfs_buf *bp; /* block buffer */ struct xfs_buf *bp; /* block buffer */
xfs_dir2_block_tail_t *btp; /* block tail pointer */ xfs_dir2_block_tail_t *btp; /* block tail pointer */
xfs_dir2_data_entry_t *dep; /* data entry pointer */ xfs_dir2_data_entry_t *dep; /* data entry pointer */
xfs_inode_t *dp; /* incore directory inode */
int dummy; /* trash */ int dummy; /* trash */
xfs_dir2_data_unused_t *dup; /* unused entry pointer */ xfs_dir2_data_unused_t *dup; /* unused entry pointer */
int endoffset; /* end of data objects */ int endoffset; /* end of data objects */
int error; /* error return value */ int error; /* error return value */
int i; /* index */ int i; /* index */
xfs_mount_t *mp; /* filesystem mount point */
int needlog; /* need to log block header */ int needlog; /* need to log block header */
int needscan; /* need to scan block freespc */ int needscan; /* need to scan block freespc */
int newoffset; /* offset from current entry */ int newoffset; /* offset from current entry */
int offset; /* target block offset */ unsigned int offset = geo->data_entry_offset;
xfs_dir2_sf_entry_t *sfep; /* sf entry pointer */ xfs_dir2_sf_entry_t *sfep; /* sf entry pointer */
xfs_dir2_sf_hdr_t *oldsfp; /* old shortform header */ xfs_dir2_sf_hdr_t *oldsfp; /* old shortform header */
xfs_dir2_sf_hdr_t *sfp; /* shortform header */ xfs_dir2_sf_hdr_t *sfp; /* shortform header */
__be16 *tagp; /* end of data entry */ __be16 *tagp; /* end of data entry */
xfs_trans_t *tp; /* transaction pointer */
struct xfs_name name; struct xfs_name name;
struct xfs_ifork *ifp;
trace_xfs_dir2_sf_to_block(args); trace_xfs_dir2_sf_to_block(args);
dp = args->dp;
tp = args->trans;
mp = dp->i_mount;
ifp = XFS_IFORK_PTR(dp, XFS_DATA_FORK);
ASSERT(ifp->if_flags & XFS_IFINLINE); ASSERT(ifp->if_flags & XFS_IFINLINE);
/* ASSERT(dp->i_d.di_size >= offsetof(struct xfs_dir2_sf_hdr, parent));
* Bomb out if the shortform directory is way too short.
*/
if (dp->i_d.di_size < offsetof(xfs_dir2_sf_hdr_t, parent)) {
ASSERT(XFS_FORCED_SHUTDOWN(mp));
return -EIO;
}
oldsfp = (xfs_dir2_sf_hdr_t *)ifp->if_u1.if_data; oldsfp = (xfs_dir2_sf_hdr_t *)ifp->if_u1.if_data;
@ -1123,7 +1112,7 @@ xfs_dir2_sf_to_block(
* The whole thing is initialized to free by the init routine. * The whole thing is initialized to free by the init routine.
* Say we're using the leaf and tail area. * Say we're using the leaf and tail area.
*/ */
dup = dp->d_ops->data_unused_p(hdr); dup = bp->b_addr + offset;
needlog = needscan = 0; needlog = needscan = 0;
error = xfs_dir2_data_use_free(args, bp, dup, args->geo->blksize - i, error = xfs_dir2_data_use_free(args, bp, dup, args->geo->blksize - i,
i, &needlog, &needscan); i, &needlog, &needscan);
@ -1146,35 +1135,37 @@ xfs_dir2_sf_to_block(
be16_to_cpu(dup->length), &needlog, &needscan); be16_to_cpu(dup->length), &needlog, &needscan);
if (error) if (error)
goto out_free; goto out_free;
/* /*
* Create entry for . * Create entry for .
*/ */
dep = dp->d_ops->data_dot_entry_p(hdr); dep = bp->b_addr + offset;
dep->inumber = cpu_to_be64(dp->i_ino); dep->inumber = cpu_to_be64(dp->i_ino);
dep->namelen = 1; dep->namelen = 1;
dep->name[0] = '.'; dep->name[0] = '.';
dp->d_ops->data_put_ftype(dep, XFS_DIR3_FT_DIR); xfs_dir2_data_put_ftype(mp, dep, XFS_DIR3_FT_DIR);
tagp = dp->d_ops->data_entry_tag_p(dep); tagp = xfs_dir2_data_entry_tag_p(mp, dep);
*tagp = cpu_to_be16((char *)dep - (char *)hdr); *tagp = cpu_to_be16(offset);
xfs_dir2_data_log_entry(args, bp, dep); xfs_dir2_data_log_entry(args, bp, dep);
blp[0].hashval = cpu_to_be32(xfs_dir_hash_dot); blp[0].hashval = cpu_to_be32(xfs_dir_hash_dot);
blp[0].address = cpu_to_be32(xfs_dir2_byte_to_dataptr( blp[0].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(offset));
(char *)dep - (char *)hdr)); offset += xfs_dir2_data_entsize(mp, dep->namelen);
/* /*
* Create entry for .. * Create entry for ..
*/ */
dep = dp->d_ops->data_dotdot_entry_p(hdr); dep = bp->b_addr + offset;
dep->inumber = cpu_to_be64(dp->d_ops->sf_get_parent_ino(sfp)); dep->inumber = cpu_to_be64(xfs_dir2_sf_get_parent_ino(sfp));
dep->namelen = 2; dep->namelen = 2;
dep->name[0] = dep->name[1] = '.'; dep->name[0] = dep->name[1] = '.';
dp->d_ops->data_put_ftype(dep, XFS_DIR3_FT_DIR); xfs_dir2_data_put_ftype(mp, dep, XFS_DIR3_FT_DIR);
tagp = dp->d_ops->data_entry_tag_p(dep); tagp = xfs_dir2_data_entry_tag_p(mp, dep);
*tagp = cpu_to_be16((char *)dep - (char *)hdr); *tagp = cpu_to_be16(offset);
xfs_dir2_data_log_entry(args, bp, dep); xfs_dir2_data_log_entry(args, bp, dep);
blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot); blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot);
blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr( blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(offset));
(char *)dep - (char *)hdr)); offset += xfs_dir2_data_entsize(mp, dep->namelen);
offset = dp->d_ops->data_first_offset;
/* /*
* Loop over existing entries, stuff them in. * Loop over existing entries, stuff them in.
*/ */
@ -1183,6 +1174,7 @@ xfs_dir2_sf_to_block(
sfep = NULL; sfep = NULL;
else else
sfep = xfs_dir2_sf_firstentry(sfp); sfep = xfs_dir2_sf_firstentry(sfp);
/* /*
* Need to preserve the existing offset values in the sf directory. * Need to preserve the existing offset values in the sf directory.
* Insert holes (unused entries) where necessary. * Insert holes (unused entries) where necessary.
@ -1199,40 +1191,39 @@ xfs_dir2_sf_to_block(
* There should be a hole here, make one. * There should be a hole here, make one.
*/ */
if (offset < newoffset) { if (offset < newoffset) {
dup = (xfs_dir2_data_unused_t *)((char *)hdr + offset); dup = bp->b_addr + offset;
dup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG); dup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG);
dup->length = cpu_to_be16(newoffset - offset); dup->length = cpu_to_be16(newoffset - offset);
*xfs_dir2_data_unused_tag_p(dup) = cpu_to_be16( *xfs_dir2_data_unused_tag_p(dup) = cpu_to_be16(offset);
((char *)dup - (char *)hdr));
xfs_dir2_data_log_unused(args, bp, dup); xfs_dir2_data_log_unused(args, bp, dup);
xfs_dir2_data_freeinsert(hdr, xfs_dir2_data_freeinsert(hdr,
dp->d_ops->data_bestfree_p(hdr), xfs_dir2_data_bestfree_p(mp, hdr),
dup, &dummy); dup, &dummy);
offset += be16_to_cpu(dup->length); offset += be16_to_cpu(dup->length);
continue; continue;
} }
/* /*
* Copy a real entry. * Copy a real entry.
*/ */
dep = (xfs_dir2_data_entry_t *)((char *)hdr + newoffset); dep = bp->b_addr + newoffset;
dep->inumber = cpu_to_be64(dp->d_ops->sf_get_ino(sfp, sfep)); dep->inumber = cpu_to_be64(xfs_dir2_sf_get_ino(mp, sfp, sfep));
dep->namelen = sfep->namelen; dep->namelen = sfep->namelen;
dp->d_ops->data_put_ftype(dep, dp->d_ops->sf_get_ftype(sfep)); xfs_dir2_data_put_ftype(mp, dep,
xfs_dir2_sf_get_ftype(mp, sfep));
memcpy(dep->name, sfep->name, dep->namelen); memcpy(dep->name, sfep->name, dep->namelen);
tagp = dp->d_ops->data_entry_tag_p(dep); tagp = xfs_dir2_data_entry_tag_p(mp, dep);
*tagp = cpu_to_be16((char *)dep - (char *)hdr); *tagp = cpu_to_be16(newoffset);
xfs_dir2_data_log_entry(args, bp, dep); xfs_dir2_data_log_entry(args, bp, dep);
name.name = sfep->name; name.name = sfep->name;
name.len = sfep->namelen; name.len = sfep->namelen;
blp[2 + i].hashval = cpu_to_be32(mp->m_dirnameops-> blp[2 + i].hashval = cpu_to_be32(xfs_dir2_hashname(mp, &name));
hashname(&name)); blp[2 + i].address =
blp[2 + i].address = cpu_to_be32(xfs_dir2_byte_to_dataptr( cpu_to_be32(xfs_dir2_byte_to_dataptr(newoffset));
(char *)dep - (char *)hdr));
offset = (int)((char *)(tagp + 1) - (char *)hdr); offset = (int)((char *)(tagp + 1) - (char *)hdr);
if (++i == sfp->count) if (++i == sfp->count)
sfep = NULL; sfep = NULL;
else else
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
} }
/* Done with the temporary buffer */ /* Done with the temporary buffer */
kmem_free(sfp); kmem_free(sfp);


@ -13,6 +13,7 @@
#include "xfs_mount.h" #include "xfs_mount.h"
#include "xfs_inode.h" #include "xfs_inode.h"
#include "xfs_dir2.h" #include "xfs_dir2.h"
#include "xfs_dir2_priv.h"
#include "xfs_error.h" #include "xfs_error.h"
#include "xfs_trans.h" #include "xfs_trans.h"
#include "xfs_buf_item.h" #include "xfs_buf_item.h"
@ -23,6 +24,71 @@ static xfs_failaddr_t xfs_dir2_data_freefind_verify(
struct xfs_dir2_data_unused *dup, struct xfs_dir2_data_unused *dup,
struct xfs_dir2_data_free **bf_ent); struct xfs_dir2_data_free **bf_ent);
struct xfs_dir2_data_free *
xfs_dir2_data_bestfree_p(
struct xfs_mount *mp,
struct xfs_dir2_data_hdr *hdr)
{
if (xfs_sb_version_hascrc(&mp->m_sb))
return ((struct xfs_dir3_data_hdr *)hdr)->best_free;
return hdr->bestfree;
}
/*
* Pointer to an entry's tag word.
*/
__be16 *
xfs_dir2_data_entry_tag_p(
struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep)
{
return (__be16 *)((char *)dep +
xfs_dir2_data_entsize(mp, dep->namelen) - sizeof(__be16));
}
uint8_t
xfs_dir2_data_get_ftype(
struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep)
{
if (xfs_sb_version_hasftype(&mp->m_sb)) {
uint8_t ftype = dep->name[dep->namelen];
if (likely(ftype < XFS_DIR3_FT_MAX))
return ftype;
}
return XFS_DIR3_FT_UNKNOWN;
}
void
xfs_dir2_data_put_ftype(
struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep,
uint8_t ftype)
{
ASSERT(ftype < XFS_DIR3_FT_MAX);
ASSERT(dep->namelen != 0);
if (xfs_sb_version_hasftype(&mp->m_sb))
dep->name[dep->namelen] = ftype;
}
/*
* The number of leaf entries is limited by the size of the block and the amount
* of space used by the data entries. We don't know how much space is used by
* the data entries yet, so just ensure that the count falls somewhere inside
* the block right now.
*/
static inline unsigned int
xfs_dir2_data_max_leaf_entries(
struct xfs_da_geometry *geo)
{
return (geo->blksize - sizeof(struct xfs_dir2_block_tail) -
geo->data_entry_offset) /
sizeof(struct xfs_dir2_leaf_entry);
}
/* /*
* Check the consistency of the data block. * Check the consistency of the data block.
* The input can also be a block-format directory. * The input can also be a block-format directory.
@ -38,40 +104,27 @@ __xfs_dir3_data_check(
xfs_dir2_block_tail_t *btp=NULL; /* block tail */ xfs_dir2_block_tail_t *btp=NULL; /* block tail */
int count; /* count of entries found */ int count; /* count of entries found */
xfs_dir2_data_hdr_t *hdr; /* data block header */ xfs_dir2_data_hdr_t *hdr; /* data block header */
xfs_dir2_data_entry_t *dep; /* data entry */
xfs_dir2_data_free_t *dfp; /* bestfree entry */ xfs_dir2_data_free_t *dfp; /* bestfree entry */
xfs_dir2_data_unused_t *dup; /* unused entry */
char *endp; /* end of useful data */
int freeseen; /* mask of bestfrees seen */ int freeseen; /* mask of bestfrees seen */
xfs_dahash_t hash; /* hash of current name */ xfs_dahash_t hash; /* hash of current name */
int i; /* leaf index */ int i; /* leaf index */
int lastfree; /* last entry was unused */ int lastfree; /* last entry was unused */
xfs_dir2_leaf_entry_t *lep=NULL; /* block leaf entries */ xfs_dir2_leaf_entry_t *lep=NULL; /* block leaf entries */
struct xfs_mount *mp = bp->b_mount; struct xfs_mount *mp = bp->b_mount;
char *p; /* current data position */
int stale; /* count of stale leaves */ int stale; /* count of stale leaves */
struct xfs_name name; struct xfs_name name;
const struct xfs_dir_ops *ops; unsigned int offset;
struct xfs_da_geometry *geo; unsigned int end;
struct xfs_da_geometry *geo = mp->m_dir_geo;
geo = mp->m_dir_geo;
/* /*
* We can be passed a null dp here from a verifier, so we need to go the * If this isn't a directory, something is seriously wrong. Bail out.
* hard way to get them.
*/ */
ops = xfs_dir_get_ops(mp, dp); if (dp && !S_ISDIR(VFS_I(dp)->i_mode))
/*
* If this isn't a directory, or we don't get handed the dir ops,
* something is seriously wrong. Bail out.
*/
if ((dp && !S_ISDIR(VFS_I(dp)->i_mode)) ||
ops != xfs_dir_get_ops(mp, NULL))
return __this_address; return __this_address;
hdr = bp->b_addr; hdr = bp->b_addr;
p = (char *)ops->data_entry_p(hdr); offset = geo->data_entry_offset;
switch (hdr->magic) { switch (hdr->magic) {
case cpu_to_be32(XFS_DIR3_BLOCK_MAGIC): case cpu_to_be32(XFS_DIR3_BLOCK_MAGIC):
@ -79,15 +132,8 @@ __xfs_dir3_data_check(
 		btp = xfs_dir2_block_tail_p(geo, hdr);
 		lep = xfs_dir2_block_leaf_p(btp);
+
+		/*
+		 * The number of leaf entries is limited by the size of the
+		 * block and the amount of space used by the data entries.
+		 * We don't know how much space is used by the data entries yet,
+		 * so just ensure that the count falls somewhere inside the
+		 * block right now.
+		 */
 		if (be32_to_cpu(btp->count) >=
-		    ((char *)btp - p) / sizeof(struct xfs_dir2_leaf_entry))
+		    xfs_dir2_data_max_leaf_entries(geo))
 			return __this_address;
 		break;
 	case cpu_to_be32(XFS_DIR3_DATA_MAGIC):
@@ -96,14 +142,14 @@ __xfs_dir3_data_check(
 	default:
 		return __this_address;
 	}
-	endp = xfs_dir3_data_endp(geo, hdr);
-	if (!endp)
+	end = xfs_dir3_data_end_offset(geo, hdr);
+	if (!end)
 		return __this_address;

 	/*
 	 * Account for zero bestfree entries.
 	 */
-	bf = ops->data_bestfree_p(hdr);
+	bf = xfs_dir2_data_bestfree_p(mp, hdr);
 	count = lastfree = freeseen = 0;
 	if (!bf[0].length) {
 		if (bf[0].offset)
@@ -128,8 +174,10 @@ __xfs_dir3_data_check(
 	/*
 	 * Loop over the data/unused entries.
 	 */
-	while (p < endp) {
-		dup = (xfs_dir2_data_unused_t *)p;
+	while (offset < end) {
+		struct xfs_dir2_data_unused	*dup = bp->b_addr + offset;
+		struct xfs_dir2_data_entry	*dep = bp->b_addr + offset;
+
 		/*
 		 * If it's unused, look for the space in the bestfree table.
 		 * If we find it, account for that, else make sure it
@@ -140,10 +188,10 @@ __xfs_dir3_data_check(
 			if (lastfree != 0)
 				return __this_address;
-			if (endp < p + be16_to_cpu(dup->length))
+			if (offset + be16_to_cpu(dup->length) > end)
 				return __this_address;
 			if (be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup)) !=
-			    (char *)dup - (char *)hdr)
+			    offset)
 				return __this_address;
 			fa = xfs_dir2_data_freefind_verify(hdr, bf, dup, &dfp);
 			if (fa)
@@ -158,7 +206,7 @@ __xfs_dir3_data_check(
 				    be16_to_cpu(bf[2].length))
 					return __this_address;
 			}
-			p += be16_to_cpu(dup->length);
+			offset += be16_to_cpu(dup->length);
 			lastfree = 1;
 			continue;
 		}
@@ -168,17 +216,15 @@ __xfs_dir3_data_check(
 		 * in the leaf section of the block.
 		 * The linear search is crude but this is DEBUG code.
 		 */
-		dep = (xfs_dir2_data_entry_t *)p;
 		if (dep->namelen == 0)
 			return __this_address;
 		if (xfs_dir_ino_validate(mp, be64_to_cpu(dep->inumber)))
 			return __this_address;
-		if (endp < p + ops->data_entsize(dep->namelen))
+		if (offset + xfs_dir2_data_entsize(mp, dep->namelen) > end)
 			return __this_address;
-		if (be16_to_cpu(*ops->data_entry_tag_p(dep)) !=
-		    (char *)dep - (char *)hdr)
+		if (be16_to_cpu(*xfs_dir2_data_entry_tag_p(mp, dep)) != offset)
 			return __this_address;
-		if (ops->data_get_ftype(dep) >= XFS_DIR3_FT_MAX)
+		if (xfs_dir2_data_get_ftype(mp, dep) >= XFS_DIR3_FT_MAX)
 			return __this_address;
 		count++;
 		lastfree = 0;
@@ -189,7 +235,7 @@ __xfs_dir3_data_check(
 				    ((char *)dep - (char *)hdr));
 			name.name = dep->name;
 			name.len = dep->namelen;
-			hash = mp->m_dirnameops->hashname(&name);
+			hash = xfs_dir2_hashname(mp, &name);
 			for (i = 0; i < be32_to_cpu(btp->count); i++) {
 				if (be32_to_cpu(lep[i].address) == addr &&
 				    be32_to_cpu(lep[i].hashval) == hash)
@@ -198,7 +244,7 @@ __xfs_dir3_data_check(
 			if (i >= be32_to_cpu(btp->count))
 				return __this_address;
 		}
-		p += ops->data_entsize(dep->namelen);
+		offset += xfs_dir2_data_entsize(mp, dep->namelen);
 	}
 	/*
 	 * Need to have seen all the entries and all the bestfree slots.
@@ -354,13 +400,13 @@ xfs_dir3_data_read(
 	struct xfs_trans	*tp,
 	struct xfs_inode	*dp,
 	xfs_dablk_t		bno,
-	xfs_daddr_t		mapped_bno,
+	unsigned int		flags,
 	struct xfs_buf		**bpp)
 {
 	int			err;

-	err = xfs_da_read_buf(tp, dp, bno, mapped_bno, bpp,
-				XFS_DATA_FORK, &xfs_dir3_data_buf_ops);
+	err = xfs_da_read_buf(tp, dp, bno, flags, bpp, XFS_DATA_FORK,
+			&xfs_dir3_data_buf_ops);
 	if (!err && tp && *bpp)
 		xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_DATA_BUF);
 	return err;
@@ -370,10 +416,10 @@ int
 xfs_dir3_data_readahead(
 	struct xfs_inode	*dp,
 	xfs_dablk_t		bno,
-	xfs_daddr_t		mapped_bno)
+	unsigned int		flags)
 {
-	return xfs_da_reada_buf(dp, bno, mapped_bno,
-				XFS_DATA_FORK, &xfs_dir3_data_reada_buf_ops);
+	return xfs_da_reada_buf(dp, bno, flags, XFS_DATA_FORK,
+			&xfs_dir3_data_reada_buf_ops);
 }

 /*
@@ -561,17 +607,16 @@ xfs_dir2_data_freeremove(
  * Given a data block, reconstruct its bestfree map.
  */
 void
-xfs_dir2_data_freescan_int(
-	struct xfs_da_geometry	*geo,
-	const struct xfs_dir_ops *ops,
-	struct xfs_dir2_data_hdr *hdr,
-	int			*loghead)
+xfs_dir2_data_freescan(
+	struct xfs_mount	*mp,
+	struct xfs_dir2_data_hdr *hdr,
+	int			*loghead)
 {
-	xfs_dir2_data_entry_t	*dep;	/* active data entry */
-	xfs_dir2_data_unused_t	*dup;	/* unused data entry */
-	struct xfs_dir2_data_free *bf;
-	char			*endp;	/* end of block's data */
-	char			*p;	/* current entry pointer */
+	struct xfs_da_geometry	*geo = mp->m_dir_geo;
+	struct xfs_dir2_data_free *bf = xfs_dir2_data_bestfree_p(mp, hdr);
+	void			*addr = hdr;
+	unsigned int		offset = geo->data_entry_offset;
+	unsigned int		end;

 	ASSERT(hdr->magic == cpu_to_be32(XFS_DIR2_DATA_MAGIC) ||
 	       hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) ||
@@ -581,79 +626,60 @@ xfs_dir2_data_freescan_int(
 	/*
 	 * Start by clearing the table.
 	 */
-	bf = ops->data_bestfree_p(hdr);
 	memset(bf, 0, sizeof(*bf) * XFS_DIR2_DATA_FD_COUNT);
 	*loghead = 1;
-	/*
-	 * Set up pointers.
-	 */
-	p = (char *)ops->data_entry_p(hdr);
-	endp = xfs_dir3_data_endp(geo, hdr);
-	/*
-	 * Loop over the block's entries.
-	 */
-	while (p < endp) {
-		dup = (xfs_dir2_data_unused_t *)p;
+
+	end = xfs_dir3_data_end_offset(geo, addr);
+	while (offset < end) {
+		struct xfs_dir2_data_unused *dup = addr + offset;
+		struct xfs_dir2_data_entry *dep = addr + offset;
+
 		/*
 		 * If it's a free entry, insert it.
 		 */
 		if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
-			ASSERT((char *)dup - (char *)hdr ==
+			ASSERT(offset ==
 			       be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup)));
 			xfs_dir2_data_freeinsert(hdr, bf, dup, loghead);
-			p += be16_to_cpu(dup->length);
+			offset += be16_to_cpu(dup->length);
+			continue;
 		}
+
 		/*
 		 * For active entries, check their tags and skip them.
 		 */
-		else {
-			dep = (xfs_dir2_data_entry_t *)p;
-			ASSERT((char *)dep - (char *)hdr ==
-			       be16_to_cpu(*ops->data_entry_tag_p(dep)));
-			p += ops->data_entsize(dep->namelen);
-		}
+		ASSERT(offset ==
+		       be16_to_cpu(*xfs_dir2_data_entry_tag_p(mp, dep)));
+		offset += xfs_dir2_data_entsize(mp, dep->namelen);
 	}
 }
-
-void
-xfs_dir2_data_freescan(
-	struct xfs_inode	*dp,
-	struct xfs_dir2_data_hdr *hdr,
-	int			*loghead)
-{
-	return xfs_dir2_data_freescan_int(dp->i_mount->m_dir_geo, dp->d_ops,
-			hdr, loghead);
-}
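
The bestfree table that xfs_dir2_data_freescan() rebuilds caches the offset and length of the three largest free regions in a directory data block, so space allocation does not have to rescan the block every time. Below is a minimal standalone model of that "keep the three largest" insertion; it is not kernel code, and the struct layout and FD_COUNT stand-in are simplified for illustration only.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define FD_COUNT 3	/* models XFS_DIR2_DATA_FD_COUNT */

struct bestfree {
	uint16_t offset;	/* byte offset of the free region */
	uint16_t length;	/* length of the free region */
};

/* Insert a free region, keeping the table sorted by descending length. */
static void bestfree_insert(struct bestfree *bf, uint16_t offset, uint16_t length)
{
	for (int i = 0; i < FD_COUNT; i++) {
		if (length > bf[i].length) {
			/* shift smaller entries down, dropping the last one */
			memmove(&bf[i + 1], &bf[i],
				(FD_COUNT - i - 1) * sizeof(*bf));
			bf[i].offset = offset;
			bf[i].length = length;
			return;
		}
	}
}

int main(void)
{
	struct bestfree bf[FD_COUNT] = { 0 };	/* freescan starts from a cleared table */

	bestfree_insert(bf, 64, 32);
	bestfree_insert(bf, 512, 128);
	bestfree_insert(bf, 1024, 16);
	bestfree_insert(bf, 2048, 64);

	for (int i = 0; i < FD_COUNT; i++)
		printf("bf[%d]: offset %u length %u\n", i, bf[i].offset, bf[i].length);
	return 0;	/* prints 512/128, 2048/64, 64/32 */
}

The real xfs_dir2_data_freeinsert() does the same kind of ordered insertion, but on the big-endian on-disk table and with the loghead flag set so the caller knows to log the header.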
 /*
  * Initialize a data block at the given block number in the directory.
  * Give back the buffer for the created block.
  */
 int						/* error */
 xfs_dir3_data_init(
-	xfs_da_args_t		*args,		/* directory operation args */
+	struct xfs_da_args	*args,		/* directory operation args */
 	xfs_dir2_db_t		blkno,		/* logical dir block number */
 	struct xfs_buf		**bpp)		/* output block buffer */
 {
-	struct xfs_buf		*bp;		/* block buffer */
-	xfs_dir2_data_hdr_t	*hdr;		/* data block header */
-	xfs_inode_t		*dp;		/* incore directory inode */
-	xfs_dir2_data_unused_t	*dup;		/* unused entry pointer */
-	struct xfs_dir2_data_free *bf;
-	int			error;		/* error return value */
-	int			i;		/* bestfree index */
-	xfs_mount_t		*mp;		/* filesystem mount point */
-	xfs_trans_t		*tp;		/* transaction pointer */
-	int			t;		/* temp */
-
-	dp = args->dp;
-	mp = dp->i_mount;
-	tp = args->trans;
+	struct xfs_trans	*tp = args->trans;
+	struct xfs_inode	*dp = args->dp;
+	struct xfs_mount	*mp = dp->i_mount;
+	struct xfs_da_geometry	*geo = args->geo;
+	struct xfs_buf		*bp;
+	struct xfs_dir2_data_hdr *hdr;
+	struct xfs_dir2_data_unused *dup;
+	struct xfs_dir2_data_free *bf;
+	int			error;
+	int			i;

 	/*
 	 * Get the buffer set up for the block.
 	 */
 	error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(args->geo, blkno),
-			-1, &bp, XFS_DATA_FORK);
+			&bp, XFS_DATA_FORK);
 	if (error)
 		return error;
 	bp->b_ops = &xfs_dir3_data_buf_ops;
@@ -675,8 +701,9 @@ xfs_dir3_data_init(
 	} else
 		hdr->magic = cpu_to_be32(XFS_DIR2_DATA_MAGIC);

-	bf = dp->d_ops->data_bestfree_p(hdr);
-	bf[0].offset = cpu_to_be16(dp->d_ops->data_entry_offset);
+	bf = xfs_dir2_data_bestfree_p(mp, hdr);
+	bf[0].offset = cpu_to_be16(geo->data_entry_offset);
+	bf[0].length = cpu_to_be16(geo->blksize - geo->data_entry_offset);
 	for (i = 1; i < XFS_DIR2_DATA_FD_COUNT; i++) {
 		bf[i].length = 0;
 		bf[i].offset = 0;
@@ -685,13 +712,11 @@ xfs_dir3_data_init(
 	/*
 	 * Set up an unused entry for the block's body.
 	 */
-	dup = dp->d_ops->data_unused_p(hdr);
+	dup = bp->b_addr + geo->data_entry_offset;
 	dup->freetag = cpu_to_be16(XFS_DIR2_DATA_FREE_TAG);
-
-	t = args->geo->blksize - (uint)dp->d_ops->data_entry_offset;
-	bf[0].length = cpu_to_be16(t);
-	dup->length = cpu_to_be16(t);
+	dup->length = bf[0].length;
 	*xfs_dir2_data_unused_tag_p(dup) = cpu_to_be16((char *)dup - (char *)hdr);
/* /*
* Log it and return it. * Log it and return it.
*/ */
@ -710,6 +735,7 @@ xfs_dir2_data_log_entry(
struct xfs_buf *bp, struct xfs_buf *bp,
xfs_dir2_data_entry_t *dep) /* data entry pointer */ xfs_dir2_data_entry_t *dep) /* data entry pointer */
{ {
struct xfs_mount *mp = bp->b_mount;
struct xfs_dir2_data_hdr *hdr = bp->b_addr; struct xfs_dir2_data_hdr *hdr = bp->b_addr;
ASSERT(hdr->magic == cpu_to_be32(XFS_DIR2_DATA_MAGIC) || ASSERT(hdr->magic == cpu_to_be32(XFS_DIR2_DATA_MAGIC) ||
@ -718,7 +744,7 @@ xfs_dir2_data_log_entry(
hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC)); hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC));
xfs_trans_log_buf(args->trans, bp, (uint)((char *)dep - (char *)hdr), xfs_trans_log_buf(args->trans, bp, (uint)((char *)dep - (char *)hdr),
(uint)((char *)(args->dp->d_ops->data_entry_tag_p(dep) + 1) - (uint)((char *)(xfs_dir2_data_entry_tag_p(mp, dep) + 1) -
(char *)hdr - 1)); (char *)hdr - 1));
} }
@ -739,8 +765,7 @@ xfs_dir2_data_log_header(
hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC)); hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC));
#endif #endif
xfs_trans_log_buf(args->trans, bp, 0, xfs_trans_log_buf(args->trans, bp, 0, args->geo->data_entry_offset - 1);
args->dp->d_ops->data_entry_offset - 1);
} }
/* /*
@ -789,11 +814,11 @@ xfs_dir2_data_make_free(
{ {
xfs_dir2_data_hdr_t *hdr; /* data block pointer */ xfs_dir2_data_hdr_t *hdr; /* data block pointer */
xfs_dir2_data_free_t *dfp; /* bestfree pointer */ xfs_dir2_data_free_t *dfp; /* bestfree pointer */
char *endptr; /* end of data area */
int needscan; /* need to regen bestfree */ int needscan; /* need to regen bestfree */
xfs_dir2_data_unused_t *newdup; /* new unused entry */ xfs_dir2_data_unused_t *newdup; /* new unused entry */
xfs_dir2_data_unused_t *postdup; /* unused entry after us */ xfs_dir2_data_unused_t *postdup; /* unused entry after us */
xfs_dir2_data_unused_t *prevdup; /* unused entry before us */ xfs_dir2_data_unused_t *prevdup; /* unused entry before us */
unsigned int end;
struct xfs_dir2_data_free *bf; struct xfs_dir2_data_free *bf;
hdr = bp->b_addr; hdr = bp->b_addr;
@ -801,14 +826,14 @@ xfs_dir2_data_make_free(
/* /*
* Figure out where the end of the data area is. * Figure out where the end of the data area is.
*/ */
endptr = xfs_dir3_data_endp(args->geo, hdr); end = xfs_dir3_data_end_offset(args->geo, hdr);
ASSERT(endptr != NULL); ASSERT(end != 0);
/* /*
* If this isn't the start of the block, then back up to * If this isn't the start of the block, then back up to
* the previous entry and see if it's free. * the previous entry and see if it's free.
*/ */
if (offset > args->dp->d_ops->data_entry_offset) { if (offset > args->geo->data_entry_offset) {
__be16 *tagp; /* tag just before us */ __be16 *tagp; /* tag just before us */
tagp = (__be16 *)((char *)hdr + offset) - 1; tagp = (__be16 *)((char *)hdr + offset) - 1;
@ -821,7 +846,7 @@ xfs_dir2_data_make_free(
* If this isn't the end of the block, see if the entry after * If this isn't the end of the block, see if the entry after
* us is free. * us is free.
*/ */
if ((char *)hdr + offset + len < endptr) { if (offset + len < end) {
postdup = postdup =
(xfs_dir2_data_unused_t *)((char *)hdr + offset + len); (xfs_dir2_data_unused_t *)((char *)hdr + offset + len);
if (be16_to_cpu(postdup->freetag) != XFS_DIR2_DATA_FREE_TAG) if (be16_to_cpu(postdup->freetag) != XFS_DIR2_DATA_FREE_TAG)
@ -834,7 +859,7 @@ xfs_dir2_data_make_free(
* Previous and following entries are both free, * Previous and following entries are both free,
* merge everything into a single free entry. * merge everything into a single free entry.
*/ */
bf = args->dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(args->dp->i_mount, hdr);
if (prevdup && postdup) { if (prevdup && postdup) {
xfs_dir2_data_free_t *dfp2; /* another bestfree pointer */ xfs_dir2_data_free_t *dfp2; /* another bestfree pointer */
@ -1025,7 +1050,7 @@ xfs_dir2_data_use_free(
* Look up the entry in the bestfree table. * Look up the entry in the bestfree table.
*/ */
oldlen = be16_to_cpu(dup->length); oldlen = be16_to_cpu(dup->length);
bf = args->dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(args->dp->i_mount, hdr);
dfp = xfs_dir2_data_freefind(hdr, bf, dup); dfp = xfs_dir2_data_freefind(hdr, bf, dup);
ASSERT(dfp || oldlen <= be16_to_cpu(bf[2].length)); ASSERT(dfp || oldlen <= be16_to_cpu(bf[2].length));
/* /*
@ -1149,19 +1174,22 @@ corrupt:
} }
 /* Find the end of the entry data in a data/block format dir block. */
-void *
-xfs_dir3_data_endp(
+unsigned int
+xfs_dir3_data_end_offset(
 	struct xfs_da_geometry	*geo,
 	struct xfs_dir2_data_hdr *hdr)
 {
+	void			*p;
+
 	switch (hdr->magic) {
 	case cpu_to_be32(XFS_DIR3_BLOCK_MAGIC):
 	case cpu_to_be32(XFS_DIR2_BLOCK_MAGIC):
-		return xfs_dir2_block_leaf_p(xfs_dir2_block_tail_p(geo, hdr));
+		p = xfs_dir2_block_leaf_p(xfs_dir2_block_tail_p(geo, hdr));
+		return p - (void *)hdr;
 	case cpu_to_be32(XFS_DIR3_DATA_MAGIC):
 	case cpu_to_be32(XFS_DIR2_DATA_MAGIC):
-		return (char *)hdr + geo->blksize;
+		return geo->blksize;
 	default:
-		return NULL;
+		return 0;
 	}
 }
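
Returning a byte offset from xfs_dir3_data_end_offset() instead of a pointer lets the walkers above do their bounds checks with plain unsigned arithmetic (offset + len > end) rather than pointer comparisons into the buffer. A standalone sketch of that loop shape follows, using an invented record format (a 16-bit length prefix) purely to illustrate the offset-based iteration; it is not the real on-disk entry layout.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Walk variable-length records: each record starts with a 16-bit length. */
static int walk(const uint8_t *blk, unsigned int offset, unsigned int end)
{
	while (offset < end) {
		uint16_t len;

		if (end - offset < sizeof(len))
			return -1;		/* truncated record header */
		memcpy(&len, blk + offset, sizeof(len));
		if (len < sizeof(len) || offset + len > end)
			return -1;		/* record overruns the block */
		printf("record at offset %u, length %u\n", offset, len);
		offset += len;			/* advance by whole records */
	}
	return 0;
}

int main(void)
{
	uint8_t blk[64] = { 0 };
	uint16_t len;

	len = 16; memcpy(blk + 0, &len, 2);	/* record #1 */
	len = 48; memcpy(blk + 16, &len, 2);	/* record #2 */
	return walk(blk, 0, sizeof(blk));
}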


@ -24,12 +24,73 @@
* Local function declarations. * Local function declarations.
*/ */
static int xfs_dir2_leaf_lookup_int(xfs_da_args_t *args, struct xfs_buf **lbpp, static int xfs_dir2_leaf_lookup_int(xfs_da_args_t *args, struct xfs_buf **lbpp,
int *indexp, struct xfs_buf **dbpp); int *indexp, struct xfs_buf **dbpp,
struct xfs_dir3_icleaf_hdr *leafhdr);
static void xfs_dir3_leaf_log_bests(struct xfs_da_args *args, static void xfs_dir3_leaf_log_bests(struct xfs_da_args *args,
struct xfs_buf *bp, int first, int last); struct xfs_buf *bp, int first, int last);
static void xfs_dir3_leaf_log_tail(struct xfs_da_args *args, static void xfs_dir3_leaf_log_tail(struct xfs_da_args *args,
struct xfs_buf *bp); struct xfs_buf *bp);
void
xfs_dir2_leaf_hdr_from_disk(
struct xfs_mount *mp,
struct xfs_dir3_icleaf_hdr *to,
struct xfs_dir2_leaf *from)
{
if (xfs_sb_version_hascrc(&mp->m_sb)) {
struct xfs_dir3_leaf *from3 = (struct xfs_dir3_leaf *)from;
to->forw = be32_to_cpu(from3->hdr.info.hdr.forw);
to->back = be32_to_cpu(from3->hdr.info.hdr.back);
to->magic = be16_to_cpu(from3->hdr.info.hdr.magic);
to->count = be16_to_cpu(from3->hdr.count);
to->stale = be16_to_cpu(from3->hdr.stale);
to->ents = from3->__ents;
ASSERT(to->magic == XFS_DIR3_LEAF1_MAGIC ||
to->magic == XFS_DIR3_LEAFN_MAGIC);
} else {
to->forw = be32_to_cpu(from->hdr.info.forw);
to->back = be32_to_cpu(from->hdr.info.back);
to->magic = be16_to_cpu(from->hdr.info.magic);
to->count = be16_to_cpu(from->hdr.count);
to->stale = be16_to_cpu(from->hdr.stale);
to->ents = from->__ents;
ASSERT(to->magic == XFS_DIR2_LEAF1_MAGIC ||
to->magic == XFS_DIR2_LEAFN_MAGIC);
}
}
void
xfs_dir2_leaf_hdr_to_disk(
struct xfs_mount *mp,
struct xfs_dir2_leaf *to,
struct xfs_dir3_icleaf_hdr *from)
{
if (xfs_sb_version_hascrc(&mp->m_sb)) {
struct xfs_dir3_leaf *to3 = (struct xfs_dir3_leaf *)to;
ASSERT(from->magic == XFS_DIR3_LEAF1_MAGIC ||
from->magic == XFS_DIR3_LEAFN_MAGIC);
to3->hdr.info.hdr.forw = cpu_to_be32(from->forw);
to3->hdr.info.hdr.back = cpu_to_be32(from->back);
to3->hdr.info.hdr.magic = cpu_to_be16(from->magic);
to3->hdr.count = cpu_to_be16(from->count);
to3->hdr.stale = cpu_to_be16(from->stale);
} else {
ASSERT(from->magic == XFS_DIR2_LEAF1_MAGIC ||
from->magic == XFS_DIR2_LEAFN_MAGIC);
to->hdr.info.forw = cpu_to_be32(from->forw);
to->hdr.info.back = cpu_to_be32(from->back);
to->hdr.info.magic = cpu_to_be16(from->magic);
to->hdr.count = cpu_to_be16(from->count);
to->hdr.stale = cpu_to_be16(from->stale);
}
}
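
These helpers replace the old d_ops->leaf_hdr_from_disk/to_disk methods with a direct check of the superblock CRC feature bit, and the incore header now carries an ents pointer so callers no longer need a separate ->leaf_ents_p accessor. The usual read-modify-write pattern, condensed from the xfs_dir2_block_to_leaf() hunk further down in this file:

	struct xfs_dir3_icleaf_hdr	leafhdr;

	xfs_dir2_leaf_hdr_from_disk(dp->i_mount, &leafhdr, leaf);
	leafhdr.count = be32_to_cpu(btp->count);
	leafhdr.stale = be32_to_cpu(btp->stale);
	xfs_dir2_leaf_hdr_to_disk(dp->i_mount, leaf, &leafhdr);
	xfs_dir3_leaf_log_header(args, lbp);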
/* /*
* Check the internal consistency of a leaf1 block. * Check the internal consistency of a leaf1 block.
* Pop an assert if something is wrong. * Pop an assert if something is wrong.
@ -43,7 +104,7 @@ xfs_dir3_leaf1_check(
struct xfs_dir2_leaf *leaf = bp->b_addr; struct xfs_dir2_leaf *leaf = bp->b_addr;
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf); xfs_dir2_leaf_hdr_from_disk(dp->i_mount, &leafhdr, leaf);
if (leafhdr.magic == XFS_DIR3_LEAF1_MAGIC) { if (leafhdr.magic == XFS_DIR3_LEAF1_MAGIC) {
struct xfs_dir3_leaf_hdr *leaf3 = bp->b_addr; struct xfs_dir3_leaf_hdr *leaf3 = bp->b_addr;
@ -52,7 +113,7 @@ xfs_dir3_leaf1_check(
} else if (leafhdr.magic != XFS_DIR2_LEAF1_MAGIC) } else if (leafhdr.magic != XFS_DIR2_LEAF1_MAGIC)
return __this_address; return __this_address;
return xfs_dir3_leaf_check_int(dp->i_mount, dp, &leafhdr, leaf); return xfs_dir3_leaf_check_int(dp->i_mount, &leafhdr, leaf);
} }
static inline void static inline void
@@ -76,31 +137,15 @@ xfs_dir3_leaf_check(
 xfs_failaddr_t
 xfs_dir3_leaf_check_int(
 	struct xfs_mount	*mp,
-	struct xfs_inode	*dp,
 	struct xfs_dir3_icleaf_hdr *hdr,
 	struct xfs_dir2_leaf	*leaf)
 {
-	struct xfs_dir2_leaf_entry *ents;
+	struct xfs_da_geometry	*geo = mp->m_dir_geo;
 	xfs_dir2_leaf_tail_t	*ltp;
 	int			stale;
 	int			i;
-	const struct xfs_dir_ops *ops;
-	struct xfs_dir3_icleaf_hdr leafhdr;
-	struct xfs_da_geometry	*geo = mp->m_dir_geo;

-	/*
-	 * we can be passed a null dp here from a verifier, so we need to go the
-	 * hard way to get them.
-	 */
-	ops = xfs_dir_get_ops(mp, dp);
-	if (!hdr) {
-		ops->leaf_hdr_from_disk(&leafhdr, leaf);
-		hdr = &leafhdr;
-	}
-	ents = ops->leaf_ents_p(leaf);
 	ltp = xfs_dir2_leaf_tail_p(geo, leaf);
/* /*
@ -108,23 +153,23 @@ xfs_dir3_leaf_check_int(
* Should factor in the size of the bests table as well. * Should factor in the size of the bests table as well.
* We can deduce a value for that from di_size. * We can deduce a value for that from di_size.
*/ */
if (hdr->count > ops->leaf_max_ents(geo)) if (hdr->count > geo->leaf_max_ents)
return __this_address; return __this_address;
/* Leaves and bests don't overlap in leaf format. */ /* Leaves and bests don't overlap in leaf format. */
if ((hdr->magic == XFS_DIR2_LEAF1_MAGIC || if ((hdr->magic == XFS_DIR2_LEAF1_MAGIC ||
hdr->magic == XFS_DIR3_LEAF1_MAGIC) && hdr->magic == XFS_DIR3_LEAF1_MAGIC) &&
(char *)&ents[hdr->count] > (char *)xfs_dir2_leaf_bests_p(ltp)) (char *)&hdr->ents[hdr->count] > (char *)xfs_dir2_leaf_bests_p(ltp))
return __this_address; return __this_address;
/* Check hash value order, count stale entries. */ /* Check hash value order, count stale entries. */
for (i = stale = 0; i < hdr->count; i++) { for (i = stale = 0; i < hdr->count; i++) {
if (i + 1 < hdr->count) { if (i + 1 < hdr->count) {
if (be32_to_cpu(ents[i].hashval) > if (be32_to_cpu(hdr->ents[i].hashval) >
be32_to_cpu(ents[i + 1].hashval)) be32_to_cpu(hdr->ents[i + 1].hashval))
return __this_address; return __this_address;
} }
if (ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR)) if (hdr->ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
stale++; stale++;
} }
if (hdr->stale != stale) if (hdr->stale != stale)
@@ -139,17 +184,18 @@ xfs_dir3_leaf_check_int(
  */
 static xfs_failaddr_t
 xfs_dir3_leaf_verify(
 	struct xfs_buf		*bp)
 {
 	struct xfs_mount	*mp = bp->b_mount;
-	struct xfs_dir2_leaf	*leaf = bp->b_addr;
+	struct xfs_dir3_icleaf_hdr leafhdr;
 	xfs_failaddr_t		fa;

 	fa = xfs_da3_blkinfo_verify(bp, bp->b_addr);
 	if (fa)
 		return fa;

-	return xfs_dir3_leaf_check_int(mp, NULL, NULL, leaf);
+	xfs_dir2_leaf_hdr_from_disk(mp, &leafhdr, bp->b_addr);
+	return xfs_dir3_leaf_check_int(mp, &leafhdr, bp->b_addr);
 }
static void static void
@ -216,13 +262,12 @@ xfs_dir3_leaf_read(
struct xfs_trans *tp, struct xfs_trans *tp,
struct xfs_inode *dp, struct xfs_inode *dp,
xfs_dablk_t fbno, xfs_dablk_t fbno,
xfs_daddr_t mappedbno,
struct xfs_buf **bpp) struct xfs_buf **bpp)
{ {
int err; int err;
err = xfs_da_read_buf(tp, dp, fbno, mappedbno, bpp, err = xfs_da_read_buf(tp, dp, fbno, 0, bpp, XFS_DATA_FORK,
XFS_DATA_FORK, &xfs_dir3_leaf1_buf_ops); &xfs_dir3_leaf1_buf_ops);
if (!err && tp && *bpp) if (!err && tp && *bpp)
xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_LEAF1_BUF); xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_LEAF1_BUF);
return err; return err;
@ -233,13 +278,12 @@ xfs_dir3_leafn_read(
struct xfs_trans *tp, struct xfs_trans *tp,
struct xfs_inode *dp, struct xfs_inode *dp,
xfs_dablk_t fbno, xfs_dablk_t fbno,
xfs_daddr_t mappedbno,
struct xfs_buf **bpp) struct xfs_buf **bpp)
{ {
int err; int err;
err = xfs_da_read_buf(tp, dp, fbno, mappedbno, bpp, err = xfs_da_read_buf(tp, dp, fbno, 0, bpp, XFS_DATA_FORK,
XFS_DATA_FORK, &xfs_dir3_leafn_buf_ops); &xfs_dir3_leafn_buf_ops);
if (!err && tp && *bpp) if (!err && tp && *bpp)
xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_LEAFN_BUF); xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_LEAFN_BUF);
return err; return err;
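
Dropping the mappedbno argument means ordinary callers read directory blocks by logical block number only; the data-block helpers take a flags word instead, and every plain read in this patch passes 0. The before/after call shapes, as they appear elsewhere in the diff:

	/* old */
	error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, -1, &lbp);
	error = xfs_dir3_data_read(tp, dp,
			xfs_dir2_db_to_da(args->geo, use_block), -1, &dbp);

	/* new */
	error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, &lbp);
	error = xfs_dir3_data_read(tp, dp,
			xfs_dir2_db_to_da(args->geo, use_block), 0, &dbp);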
@ -311,7 +355,7 @@ xfs_dir3_leaf_get_buf(
bno < xfs_dir2_byte_to_db(args->geo, XFS_DIR2_FREE_OFFSET)); bno < xfs_dir2_byte_to_db(args->geo, XFS_DIR2_FREE_OFFSET));
error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(args->geo, bno), error = xfs_da_get_buf(tp, dp, xfs_dir2_db_to_da(args->geo, bno),
-1, &bp, XFS_DATA_FORK); &bp, XFS_DATA_FORK);
if (error) if (error)
return error; return error;
@ -346,7 +390,6 @@ xfs_dir2_block_to_leaf(
int needscan; /* need to rescan bestfree */ int needscan; /* need to rescan bestfree */
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
struct xfs_dir2_data_free *bf; struct xfs_dir2_data_free *bf;
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
trace_xfs_dir2_block_to_leaf(args); trace_xfs_dir2_block_to_leaf(args);
@ -375,24 +418,24 @@ xfs_dir2_block_to_leaf(
xfs_dir3_data_check(dp, dbp); xfs_dir3_data_check(dp, dbp);
btp = xfs_dir2_block_tail_p(args->geo, hdr); btp = xfs_dir2_block_tail_p(args->geo, hdr);
blp = xfs_dir2_block_leaf_p(btp); blp = xfs_dir2_block_leaf_p(btp);
bf = dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
ents = dp->d_ops->leaf_ents_p(leaf);
/* /*
* Set the counts in the leaf header. * Set the counts in the leaf header.
*/ */
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf); xfs_dir2_leaf_hdr_from_disk(dp->i_mount, &leafhdr, leaf);
leafhdr.count = be32_to_cpu(btp->count); leafhdr.count = be32_to_cpu(btp->count);
leafhdr.stale = be32_to_cpu(btp->stale); leafhdr.stale = be32_to_cpu(btp->stale);
dp->d_ops->leaf_hdr_to_disk(leaf, &leafhdr); xfs_dir2_leaf_hdr_to_disk(dp->i_mount, leaf, &leafhdr);
xfs_dir3_leaf_log_header(args, lbp); xfs_dir3_leaf_log_header(args, lbp);
/* /*
* Could compact these but I think we always do the conversion * Could compact these but I think we always do the conversion
* after squeezing out stale entries. * after squeezing out stale entries.
*/ */
memcpy(ents, blp, be32_to_cpu(btp->count) * sizeof(xfs_dir2_leaf_entry_t)); memcpy(leafhdr.ents, blp,
xfs_dir3_leaf_log_ents(args, lbp, 0, leafhdr.count - 1); be32_to_cpu(btp->count) * sizeof(struct xfs_dir2_leaf_entry));
xfs_dir3_leaf_log_ents(args, &leafhdr, lbp, 0, leafhdr.count - 1);
needscan = 0; needscan = 0;
needlog = 1; needlog = 1;
/* /*
@ -415,7 +458,7 @@ xfs_dir2_block_to_leaf(
hdr->magic = cpu_to_be32(XFS_DIR3_DATA_MAGIC); hdr->magic = cpu_to_be32(XFS_DIR3_DATA_MAGIC);
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
/* /*
* Set up leaf tail and bests table. * Set up leaf tail and bests table.
*/ */
@ -594,7 +637,7 @@ xfs_dir2_leaf_addname(
trace_xfs_dir2_leaf_addname(args); trace_xfs_dir2_leaf_addname(args);
error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, -1, &lbp); error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, &lbp);
if (error) if (error)
return error; return error;
@ -607,10 +650,10 @@ xfs_dir2_leaf_addname(
index = xfs_dir2_leaf_search_hash(args, lbp); index = xfs_dir2_leaf_search_hash(args, lbp);
leaf = lbp->b_addr; leaf = lbp->b_addr;
ltp = xfs_dir2_leaf_tail_p(args->geo, leaf); ltp = xfs_dir2_leaf_tail_p(args->geo, leaf);
ents = dp->d_ops->leaf_ents_p(leaf); xfs_dir2_leaf_hdr_from_disk(dp->i_mount, &leafhdr, leaf);
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf); ents = leafhdr.ents;
bestsp = xfs_dir2_leaf_bests_p(ltp); bestsp = xfs_dir2_leaf_bests_p(ltp);
length = dp->d_ops->data_entsize(args->namelen); length = xfs_dir2_data_entsize(dp->i_mount, args->namelen);
/* /*
* See if there are any entries with the same hash value * See if there are any entries with the same hash value
@ -773,7 +816,7 @@ xfs_dir2_leaf_addname(
else else
xfs_dir3_leaf_log_bests(args, lbp, use_block, use_block); xfs_dir3_leaf_log_bests(args, lbp, use_block, use_block);
hdr = dbp->b_addr; hdr = dbp->b_addr;
bf = dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
bestsp[use_block] = bf[0].length; bestsp[use_block] = bf[0].length;
grown = 1; grown = 1;
} else { } else {
@ -783,13 +826,13 @@ xfs_dir2_leaf_addname(
*/ */
error = xfs_dir3_data_read(tp, dp, error = xfs_dir3_data_read(tp, dp,
xfs_dir2_db_to_da(args->geo, use_block), xfs_dir2_db_to_da(args->geo, use_block),
-1, &dbp); 0, &dbp);
if (error) { if (error) {
xfs_trans_brelse(tp, lbp); xfs_trans_brelse(tp, lbp);
return error; return error;
} }
hdr = dbp->b_addr; hdr = dbp->b_addr;
bf = dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
grown = 0; grown = 0;
} }
/* /*
@ -815,14 +858,14 @@ xfs_dir2_leaf_addname(
dep->inumber = cpu_to_be64(args->inumber); dep->inumber = cpu_to_be64(args->inumber);
dep->namelen = args->namelen; dep->namelen = args->namelen;
memcpy(dep->name, args->name, dep->namelen); memcpy(dep->name, args->name, dep->namelen);
dp->d_ops->data_put_ftype(dep, args->filetype); xfs_dir2_data_put_ftype(dp->i_mount, dep, args->filetype);
tagp = dp->d_ops->data_entry_tag_p(dep); tagp = xfs_dir2_data_entry_tag_p(dp->i_mount, dep);
*tagp = cpu_to_be16((char *)dep - (char *)hdr); *tagp = cpu_to_be16((char *)dep - (char *)hdr);
/* /*
* Need to scan fix up the bestfree table. * Need to scan fix up the bestfree table.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
/* /*
* Need to log the data block's header. * Need to log the data block's header.
*/ */
@ -852,9 +895,9 @@ xfs_dir2_leaf_addname(
/* /*
* Log the leaf fields and give up the buffers. * Log the leaf fields and give up the buffers.
*/ */
dp->d_ops->leaf_hdr_to_disk(leaf, &leafhdr); xfs_dir2_leaf_hdr_to_disk(dp->i_mount, leaf, &leafhdr);
xfs_dir3_leaf_log_header(args, lbp); xfs_dir3_leaf_log_header(args, lbp);
xfs_dir3_leaf_log_ents(args, lbp, lfloglow, lfloghigh); xfs_dir3_leaf_log_ents(args, &leafhdr, lbp, lfloglow, lfloghigh);
xfs_dir3_leaf_check(dp, lbp); xfs_dir3_leaf_check(dp, lbp);
xfs_dir3_data_check(dp, dbp); xfs_dir3_data_check(dp, dbp);
return 0; return 0;
@ -874,7 +917,6 @@ xfs_dir3_leaf_compact(
xfs_dir2_leaf_t *leaf; /* leaf structure */ xfs_dir2_leaf_t *leaf; /* leaf structure */
int loglow; /* first leaf entry to log */ int loglow; /* first leaf entry to log */
int to; /* target leaf index */ int to; /* target leaf index */
struct xfs_dir2_leaf_entry *ents;
struct xfs_inode *dp = args->dp; struct xfs_inode *dp = args->dp;
leaf = bp->b_addr; leaf = bp->b_addr;
@ -884,9 +926,9 @@ xfs_dir3_leaf_compact(
/* /*
* Compress out the stale entries in place. * Compress out the stale entries in place.
*/ */
ents = dp->d_ops->leaf_ents_p(leaf);
for (from = to = 0, loglow = -1; from < leafhdr->count; from++) { for (from = to = 0, loglow = -1; from < leafhdr->count; from++) {
if (ents[from].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR)) if (leafhdr->ents[from].address ==
cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
continue; continue;
/* /*
* Only actually copy the entries that are different. * Only actually copy the entries that are different.
@ -894,7 +936,7 @@ xfs_dir3_leaf_compact(
if (from > to) { if (from > to) {
if (loglow == -1) if (loglow == -1)
loglow = to; loglow = to;
ents[to] = ents[from]; leafhdr->ents[to] = leafhdr->ents[from];
} }
to++; to++;
} }
@ -905,10 +947,10 @@ xfs_dir3_leaf_compact(
leafhdr->count -= leafhdr->stale; leafhdr->count -= leafhdr->stale;
leafhdr->stale = 0; leafhdr->stale = 0;
dp->d_ops->leaf_hdr_to_disk(leaf, leafhdr); xfs_dir2_leaf_hdr_to_disk(dp->i_mount, leaf, leafhdr);
xfs_dir3_leaf_log_header(args, bp); xfs_dir3_leaf_log_header(args, bp);
if (loglow != -1) if (loglow != -1)
xfs_dir3_leaf_log_ents(args, bp, loglow, to - 1); xfs_dir3_leaf_log_ents(args, leafhdr, bp, loglow, to - 1);
} }
/* /*
@ -1037,6 +1079,7 @@ xfs_dir3_leaf_log_bests(
void void
xfs_dir3_leaf_log_ents( xfs_dir3_leaf_log_ents(
struct xfs_da_args *args, struct xfs_da_args *args,
struct xfs_dir3_icleaf_hdr *hdr,
struct xfs_buf *bp, struct xfs_buf *bp,
int first, int first,
int last) int last)
@ -1044,16 +1087,14 @@ xfs_dir3_leaf_log_ents(
xfs_dir2_leaf_entry_t *firstlep; /* pointer to first entry */ xfs_dir2_leaf_entry_t *firstlep; /* pointer to first entry */
xfs_dir2_leaf_entry_t *lastlep; /* pointer to last entry */ xfs_dir2_leaf_entry_t *lastlep; /* pointer to last entry */
struct xfs_dir2_leaf *leaf = bp->b_addr; struct xfs_dir2_leaf *leaf = bp->b_addr;
struct xfs_dir2_leaf_entry *ents;
ASSERT(leaf->hdr.info.magic == cpu_to_be16(XFS_DIR2_LEAF1_MAGIC) || ASSERT(leaf->hdr.info.magic == cpu_to_be16(XFS_DIR2_LEAF1_MAGIC) ||
leaf->hdr.info.magic == cpu_to_be16(XFS_DIR3_LEAF1_MAGIC) || leaf->hdr.info.magic == cpu_to_be16(XFS_DIR3_LEAF1_MAGIC) ||
leaf->hdr.info.magic == cpu_to_be16(XFS_DIR2_LEAFN_MAGIC) || leaf->hdr.info.magic == cpu_to_be16(XFS_DIR2_LEAFN_MAGIC) ||
leaf->hdr.info.magic == cpu_to_be16(XFS_DIR3_LEAFN_MAGIC)); leaf->hdr.info.magic == cpu_to_be16(XFS_DIR3_LEAFN_MAGIC));
ents = args->dp->d_ops->leaf_ents_p(leaf); firstlep = &hdr->ents[first];
firstlep = &ents[first]; lastlep = &hdr->ents[last];
lastlep = &ents[last];
xfs_trans_log_buf(args->trans, bp, xfs_trans_log_buf(args->trans, bp,
(uint)((char *)firstlep - (char *)leaf), (uint)((char *)firstlep - (char *)leaf),
(uint)((char *)lastlep - (char *)leaf + sizeof(*lastlep) - 1)); (uint)((char *)lastlep - (char *)leaf + sizeof(*lastlep) - 1));
@ -1076,7 +1117,7 @@ xfs_dir3_leaf_log_header(
xfs_trans_log_buf(args->trans, bp, xfs_trans_log_buf(args->trans, bp,
(uint)((char *)&leaf->hdr - (char *)leaf), (uint)((char *)&leaf->hdr - (char *)leaf),
args->dp->d_ops->leaf_hdr_size - 1); args->geo->leaf_hdr_size - 1);
} }
/* /*
@ -1115,28 +1156,27 @@ xfs_dir2_leaf_lookup(
int error; /* error return code */ int error; /* error return code */
int index; /* found entry index */ int index; /* found entry index */
struct xfs_buf *lbp; /* leaf buffer */ struct xfs_buf *lbp; /* leaf buffer */
xfs_dir2_leaf_t *leaf; /* leaf structure */
xfs_dir2_leaf_entry_t *lep; /* leaf entry */ xfs_dir2_leaf_entry_t *lep; /* leaf entry */
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
struct xfs_dir2_leaf_entry *ents; struct xfs_dir3_icleaf_hdr leafhdr;
trace_xfs_dir2_leaf_lookup(args); trace_xfs_dir2_leaf_lookup(args);
/* /*
* Look up name in the leaf block, returning both buffers and index. * Look up name in the leaf block, returning both buffers and index.
*/ */
if ((error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp))) { error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp, &leafhdr);
if (error)
return error; return error;
}
tp = args->trans; tp = args->trans;
dp = args->dp; dp = args->dp;
xfs_dir3_leaf_check(dp, lbp); xfs_dir3_leaf_check(dp, lbp);
leaf = lbp->b_addr;
ents = dp->d_ops->leaf_ents_p(leaf);
/* /*
* Get to the leaf entry and contained data entry address. * Get to the leaf entry and contained data entry address.
*/ */
lep = &ents[index]; lep = &leafhdr.ents[index];
/* /*
* Point to the data entry. * Point to the data entry.
@ -1148,7 +1188,7 @@ xfs_dir2_leaf_lookup(
* Return the found inode number & CI name if appropriate * Return the found inode number & CI name if appropriate
*/ */
args->inumber = be64_to_cpu(dep->inumber); args->inumber = be64_to_cpu(dep->inumber);
args->filetype = dp->d_ops->data_get_ftype(dep); args->filetype = xfs_dir2_data_get_ftype(dp->i_mount, dep);
error = xfs_dir_cilookup_result(args, dep->name, dep->namelen); error = xfs_dir_cilookup_result(args, dep->name, dep->namelen);
xfs_trans_brelse(tp, dbp); xfs_trans_brelse(tp, dbp);
xfs_trans_brelse(tp, lbp); xfs_trans_brelse(tp, lbp);
@ -1166,7 +1206,8 @@ xfs_dir2_leaf_lookup_int(
xfs_da_args_t *args, /* operation arguments */ xfs_da_args_t *args, /* operation arguments */
struct xfs_buf **lbpp, /* out: leaf buffer */ struct xfs_buf **lbpp, /* out: leaf buffer */
int *indexp, /* out: index in leaf block */ int *indexp, /* out: index in leaf block */
struct xfs_buf **dbpp) /* out: data buffer */ struct xfs_buf **dbpp, /* out: data buffer */
struct xfs_dir3_icleaf_hdr *leafhdr)
{ {
xfs_dir2_db_t curdb = -1; /* current data block number */ xfs_dir2_db_t curdb = -1; /* current data block number */
struct xfs_buf *dbp = NULL; /* data buffer */ struct xfs_buf *dbp = NULL; /* data buffer */
@ -1182,22 +1223,19 @@ xfs_dir2_leaf_lookup_int(
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
xfs_dir2_db_t cidb = -1; /* case match data block no. */ xfs_dir2_db_t cidb = -1; /* case match data block no. */
enum xfs_dacmp cmp; /* name compare result */ enum xfs_dacmp cmp; /* name compare result */
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir3_icleaf_hdr leafhdr;
dp = args->dp; dp = args->dp;
tp = args->trans; tp = args->trans;
mp = dp->i_mount; mp = dp->i_mount;
error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, -1, &lbp); error = xfs_dir3_leaf_read(tp, dp, args->geo->leafblk, &lbp);
if (error) if (error)
return error; return error;
*lbpp = lbp; *lbpp = lbp;
leaf = lbp->b_addr; leaf = lbp->b_addr;
xfs_dir3_leaf_check(dp, lbp); xfs_dir3_leaf_check(dp, lbp);
ents = dp->d_ops->leaf_ents_p(leaf); xfs_dir2_leaf_hdr_from_disk(mp, leafhdr, leaf);
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf);
/* /*
* Look for the first leaf entry with our hash value. * Look for the first leaf entry with our hash value.
@ -1207,8 +1245,9 @@ xfs_dir2_leaf_lookup_int(
* Loop over all the entries with the right hash value * Loop over all the entries with the right hash value
* looking to match the name. * looking to match the name.
*/ */
for (lep = &ents[index]; for (lep = &leafhdr->ents[index];
index < leafhdr.count && be32_to_cpu(lep->hashval) == args->hashval; index < leafhdr->count &&
be32_to_cpu(lep->hashval) == args->hashval;
lep++, index++) { lep++, index++) {
/* /*
* Skip over stale leaf entries. * Skip over stale leaf entries.
@ -1229,7 +1268,7 @@ xfs_dir2_leaf_lookup_int(
xfs_trans_brelse(tp, dbp); xfs_trans_brelse(tp, dbp);
error = xfs_dir3_data_read(tp, dp, error = xfs_dir3_data_read(tp, dp,
xfs_dir2_db_to_da(args->geo, newdb), xfs_dir2_db_to_da(args->geo, newdb),
-1, &dbp); 0, &dbp);
if (error) { if (error) {
xfs_trans_brelse(tp, lbp); xfs_trans_brelse(tp, lbp);
return error; return error;
@ -1247,7 +1286,7 @@ xfs_dir2_leaf_lookup_int(
* and buffer. If it's the first case-insensitive match, store * and buffer. If it's the first case-insensitive match, store
* the index and buffer and continue looking for an exact match. * the index and buffer and continue looking for an exact match.
*/ */
cmp = mp->m_dirnameops->compname(args, dep->name, dep->namelen); cmp = xfs_dir2_compname(args, dep->name, dep->namelen);
if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) { if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) {
args->cmpresult = cmp; args->cmpresult = cmp;
*indexp = index; *indexp = index;
@ -1271,7 +1310,7 @@ xfs_dir2_leaf_lookup_int(
xfs_trans_brelse(tp, dbp); xfs_trans_brelse(tp, dbp);
error = xfs_dir3_data_read(tp, dp, error = xfs_dir3_data_read(tp, dp,
xfs_dir2_db_to_da(args->geo, cidb), xfs_dir2_db_to_da(args->geo, cidb),
-1, &dbp); 0, &dbp);
if (error) { if (error) {
xfs_trans_brelse(tp, lbp); xfs_trans_brelse(tp, lbp);
return error; return error;
@ -1297,6 +1336,7 @@ int /* error */
xfs_dir2_leaf_removename( xfs_dir2_leaf_removename(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
struct xfs_da_geometry *geo = args->geo;
__be16 *bestsp; /* leaf block best freespace */ __be16 *bestsp; /* leaf block best freespace */
xfs_dir2_data_hdr_t *hdr; /* data block header */ xfs_dir2_data_hdr_t *hdr; /* data block header */
xfs_dir2_db_t db; /* data block number */ xfs_dir2_db_t db; /* data block number */
@ -1314,7 +1354,6 @@ xfs_dir2_leaf_removename(
int needscan; /* need to rescan data frees */ int needscan; /* need to rescan data frees */
xfs_dir2_data_off_t oldbest; /* old value of best free */ xfs_dir2_data_off_t oldbest; /* old value of best free */
struct xfs_dir2_data_free *bf; /* bestfree table */ struct xfs_dir2_data_free *bf; /* bestfree table */
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
trace_xfs_dir2_leaf_removename(args); trace_xfs_dir2_leaf_removename(args);
@ -1322,51 +1361,54 @@ xfs_dir2_leaf_removename(
/* /*
* Lookup the leaf entry, get the leaf and data blocks read in. * Lookup the leaf entry, get the leaf and data blocks read in.
*/ */
if ((error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp))) { error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp, &leafhdr);
if (error)
return error; return error;
}
dp = args->dp; dp = args->dp;
leaf = lbp->b_addr; leaf = lbp->b_addr;
hdr = dbp->b_addr; hdr = dbp->b_addr;
xfs_dir3_data_check(dp, dbp); xfs_dir3_data_check(dp, dbp);
bf = dp->d_ops->data_bestfree_p(hdr); bf = xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf);
ents = dp->d_ops->leaf_ents_p(leaf);
/* /*
* Point to the leaf entry, use that to point to the data entry. * Point to the leaf entry, use that to point to the data entry.
*/ */
lep = &ents[index]; lep = &leafhdr.ents[index];
db = xfs_dir2_dataptr_to_db(args->geo, be32_to_cpu(lep->address)); db = xfs_dir2_dataptr_to_db(geo, be32_to_cpu(lep->address));
dep = (xfs_dir2_data_entry_t *)((char *)hdr + dep = (xfs_dir2_data_entry_t *)((char *)hdr +
xfs_dir2_dataptr_to_off(args->geo, be32_to_cpu(lep->address))); xfs_dir2_dataptr_to_off(geo, be32_to_cpu(lep->address)));
needscan = needlog = 0; needscan = needlog = 0;
oldbest = be16_to_cpu(bf[0].length); oldbest = be16_to_cpu(bf[0].length);
-	ltp = xfs_dir2_leaf_tail_p(args->geo, leaf);
+	ltp = xfs_dir2_leaf_tail_p(geo, leaf);
 	bestsp = xfs_dir2_leaf_bests_p(ltp);
-	if (be16_to_cpu(bestsp[db]) != oldbest)
+	if (be16_to_cpu(bestsp[db]) != oldbest) {
+		xfs_buf_corruption_error(lbp);
 		return -EFSCORRUPTED;
+	}
 	/*
 	 * Mark the former data entry unused.
 	 */
 	xfs_dir2_data_make_free(args, dbp,
 		(xfs_dir2_data_aoff_t)((char *)dep - (char *)hdr),
-		dp->d_ops->data_entsize(dep->namelen), &needlog, &needscan);
+		xfs_dir2_data_entsize(dp->i_mount, dep->namelen), &needlog,
+		&needscan);
/* /*
* We just mark the leaf entry stale by putting a null in it. * We just mark the leaf entry stale by putting a null in it.
*/ */
leafhdr.stale++; leafhdr.stale++;
dp->d_ops->leaf_hdr_to_disk(leaf, &leafhdr); xfs_dir2_leaf_hdr_to_disk(dp->i_mount, leaf, &leafhdr);
xfs_dir3_leaf_log_header(args, lbp); xfs_dir3_leaf_log_header(args, lbp);
lep->address = cpu_to_be32(XFS_DIR2_NULL_DATAPTR); lep->address = cpu_to_be32(XFS_DIR2_NULL_DATAPTR);
xfs_dir3_leaf_log_ents(args, lbp, index, index); xfs_dir3_leaf_log_ents(args, &leafhdr, lbp, index, index);
/* /*
* Scan the freespace in the data block again if necessary, * Scan the freespace in the data block again if necessary,
* log the data block header if necessary. * log the data block header if necessary.
*/ */
if (needscan) if (needscan)
xfs_dir2_data_freescan(dp, hdr, &needlog); xfs_dir2_data_freescan(dp->i_mount, hdr, &needlog);
if (needlog) if (needlog)
xfs_dir2_data_log_header(args, dbp); xfs_dir2_data_log_header(args, dbp);
/* /*
@ -1382,8 +1424,8 @@ xfs_dir2_leaf_removename(
* If the data block is now empty then get rid of the data block. * If the data block is now empty then get rid of the data block.
*/ */
if (be16_to_cpu(bf[0].length) == if (be16_to_cpu(bf[0].length) ==
args->geo->blksize - dp->d_ops->data_entry_offset) { geo->blksize - geo->data_entry_offset) {
ASSERT(db != args->geo->datablk); ASSERT(db != geo->datablk);
if ((error = xfs_dir2_shrink_inode(args, db, dbp))) { if ((error = xfs_dir2_shrink_inode(args, db, dbp))) {
/* /*
* Nope, can't get rid of it because it caused * Nope, can't get rid of it because it caused
@ -1425,7 +1467,7 @@ xfs_dir2_leaf_removename(
/* /*
* If the data block was not the first one, drop it. * If the data block was not the first one, drop it.
*/ */
else if (db != args->geo->datablk) else if (db != geo->datablk)
dbp = NULL; dbp = NULL;
xfs_dir3_leaf_check(dp, lbp); xfs_dir3_leaf_check(dp, lbp);
@ -1448,26 +1490,24 @@ xfs_dir2_leaf_replace(
int error; /* error return code */ int error; /* error return code */
int index; /* index of leaf entry */ int index; /* index of leaf entry */
struct xfs_buf *lbp; /* leaf buffer */ struct xfs_buf *lbp; /* leaf buffer */
xfs_dir2_leaf_t *leaf; /* leaf structure */
xfs_dir2_leaf_entry_t *lep; /* leaf entry */ xfs_dir2_leaf_entry_t *lep; /* leaf entry */
xfs_trans_t *tp; /* transaction pointer */ xfs_trans_t *tp; /* transaction pointer */
struct xfs_dir2_leaf_entry *ents; struct xfs_dir3_icleaf_hdr leafhdr;
trace_xfs_dir2_leaf_replace(args); trace_xfs_dir2_leaf_replace(args);
/* /*
* Look up the entry. * Look up the entry.
*/ */
if ((error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp))) { error = xfs_dir2_leaf_lookup_int(args, &lbp, &index, &dbp, &leafhdr);
if (error)
return error; return error;
}
dp = args->dp; dp = args->dp;
leaf = lbp->b_addr;
ents = dp->d_ops->leaf_ents_p(leaf);
/* /*
* Point to the leaf entry, get data address from it. * Point to the leaf entry, get data address from it.
*/ */
lep = &ents[index]; lep = &leafhdr.ents[index];
/* /*
* Point to the data entry. * Point to the data entry.
*/ */
@ -1479,7 +1519,7 @@ xfs_dir2_leaf_replace(
* Put the new inode number in, log it. * Put the new inode number in, log it.
*/ */
dep->inumber = cpu_to_be64(args->inumber); dep->inumber = cpu_to_be64(args->inumber);
dp->d_ops->data_put_ftype(dep, args->filetype); xfs_dir2_data_put_ftype(dp->i_mount, dep, args->filetype);
tp = args->trans; tp = args->trans;
xfs_dir2_data_log_entry(args, dbp, dep); xfs_dir2_data_log_entry(args, dbp, dep);
xfs_dir3_leaf_check(dp, lbp); xfs_dir3_leaf_check(dp, lbp);
@ -1501,21 +1541,17 @@ xfs_dir2_leaf_search_hash(
xfs_dahash_t hashwant; /* hash value looking for */ xfs_dahash_t hashwant; /* hash value looking for */
int high; /* high leaf index */ int high; /* high leaf index */
int low; /* low leaf index */ int low; /* low leaf index */
xfs_dir2_leaf_t *leaf; /* leaf structure */
xfs_dir2_leaf_entry_t *lep; /* leaf entry */ xfs_dir2_leaf_entry_t *lep; /* leaf entry */
int mid=0; /* current leaf index */ int mid=0; /* current leaf index */
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
leaf = lbp->b_addr; xfs_dir2_leaf_hdr_from_disk(args->dp->i_mount, &leafhdr, lbp->b_addr);
ents = args->dp->d_ops->leaf_ents_p(leaf);
args->dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf);
/* /*
* Note, the table cannot be empty, so we have to go through the loop. * Note, the table cannot be empty, so we have to go through the loop.
* Binary search the leaf entries looking for our hash value. * Binary search the leaf entries looking for our hash value.
*/ */
for (lep = ents, low = 0, high = leafhdr.count - 1, for (lep = leafhdr.ents, low = 0, high = leafhdr.count - 1,
hashwant = args->hashval; hashwant = args->hashval;
low <= high; ) { low <= high; ) {
mid = (low + high) >> 1; mid = (low + high) >> 1;
@ -1552,6 +1588,7 @@ xfs_dir2_leaf_trim_data(
struct xfs_buf *lbp, /* leaf buffer */ struct xfs_buf *lbp, /* leaf buffer */
xfs_dir2_db_t db) /* data block number */ xfs_dir2_db_t db) /* data block number */
{ {
struct xfs_da_geometry *geo = args->geo;
__be16 *bestsp; /* leaf bests table */ __be16 *bestsp; /* leaf bests table */
struct xfs_buf *dbp; /* data block buffer */ struct xfs_buf *dbp; /* data block buffer */
xfs_inode_t *dp; /* incore directory inode */ xfs_inode_t *dp; /* incore directory inode */
@ -1565,23 +1602,23 @@ xfs_dir2_leaf_trim_data(
/* /*
* Read the offending data block. We need its buffer. * Read the offending data block. We need its buffer.
*/ */
error = xfs_dir3_data_read(tp, dp, xfs_dir2_db_to_da(args->geo, db), error = xfs_dir3_data_read(tp, dp, xfs_dir2_db_to_da(geo, db), 0, &dbp);
-1, &dbp);
if (error) if (error)
return error; return error;
leaf = lbp->b_addr; leaf = lbp->b_addr;
ltp = xfs_dir2_leaf_tail_p(args->geo, leaf); ltp = xfs_dir2_leaf_tail_p(geo, leaf);
#ifdef DEBUG #ifdef DEBUG
{ {
struct xfs_dir2_data_hdr *hdr = dbp->b_addr; struct xfs_dir2_data_hdr *hdr = dbp->b_addr;
struct xfs_dir2_data_free *bf = dp->d_ops->data_bestfree_p(hdr); struct xfs_dir2_data_free *bf =
xfs_dir2_data_bestfree_p(dp->i_mount, hdr);
ASSERT(hdr->magic == cpu_to_be32(XFS_DIR2_DATA_MAGIC) || ASSERT(hdr->magic == cpu_to_be32(XFS_DIR2_DATA_MAGIC) ||
hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC)); hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC));
ASSERT(be16_to_cpu(bf[0].length) == ASSERT(be16_to_cpu(bf[0].length) ==
args->geo->blksize - dp->d_ops->data_entry_offset); geo->blksize - geo->data_entry_offset);
ASSERT(db == be32_to_cpu(ltp->bestcount) - 1); ASSERT(db == be32_to_cpu(ltp->bestcount) - 1);
} }
#endif #endif
@ -1639,7 +1676,6 @@ xfs_dir2_node_to_leaf(
int error; /* error return code */ int error; /* error return code */
struct xfs_buf *fbp; /* buffer for freespace block */ struct xfs_buf *fbp; /* buffer for freespace block */
xfs_fileoff_t fo; /* freespace file offset */ xfs_fileoff_t fo; /* freespace file offset */
xfs_dir2_free_t *free; /* freespace structure */
struct xfs_buf *lbp; /* buffer for leaf block */ struct xfs_buf *lbp; /* buffer for leaf block */
xfs_dir2_leaf_tail_t *ltp; /* tail of leaf structure */ xfs_dir2_leaf_tail_t *ltp; /* tail of leaf structure */
xfs_dir2_leaf_t *leaf; /* leaf structure */ xfs_dir2_leaf_t *leaf; /* leaf structure */
@ -1697,7 +1733,7 @@ xfs_dir2_node_to_leaf(
return 0; return 0;
lbp = state->path.blk[0].bp; lbp = state->path.blk[0].bp;
leaf = lbp->b_addr; leaf = lbp->b_addr;
dp->d_ops->leaf_hdr_from_disk(&leafhdr, leaf); xfs_dir2_leaf_hdr_from_disk(mp, &leafhdr, leaf);
ASSERT(leafhdr.magic == XFS_DIR2_LEAFN_MAGIC || ASSERT(leafhdr.magic == XFS_DIR2_LEAFN_MAGIC ||
leafhdr.magic == XFS_DIR3_LEAFN_MAGIC); leafhdr.magic == XFS_DIR3_LEAFN_MAGIC);
@ -1708,8 +1744,7 @@ xfs_dir2_node_to_leaf(
error = xfs_dir2_free_read(tp, dp, args->geo->freeblk, &fbp); error = xfs_dir2_free_read(tp, dp, args->geo->freeblk, &fbp);
if (error) if (error)
return error; return error;
free = fbp->b_addr; xfs_dir2_free_hdr_from_disk(mp, &freehdr, fbp->b_addr);
dp->d_ops->free_hdr_from_disk(&freehdr, free);
ASSERT(!freehdr.firstdb); ASSERT(!freehdr.firstdb);
@ -1743,10 +1778,10 @@ xfs_dir2_node_to_leaf(
/* /*
* Set up the leaf bests table. * Set up the leaf bests table.
*/ */
memcpy(xfs_dir2_leaf_bests_p(ltp), dp->d_ops->free_bests_p(free), memcpy(xfs_dir2_leaf_bests_p(ltp), freehdr.bests,
freehdr.nvalid * sizeof(xfs_dir2_data_off_t)); freehdr.nvalid * sizeof(xfs_dir2_data_off_t));
dp->d_ops->leaf_hdr_to_disk(leaf, &leafhdr); xfs_dir2_leaf_hdr_to_disk(mp, leaf, &leafhdr);
xfs_dir3_leaf_log_header(args, lbp); xfs_dir3_leaf_log_header(args, lbp);
xfs_dir3_leaf_log_bests(args, lbp, 0, be32_to_cpu(ltp->bestcount) - 1); xfs_dir3_leaf_log_bests(args, lbp, 0, be32_to_cpu(ltp->bestcount) - 1);
xfs_dir3_leaf_log_tail(args, lbp); xfs_dir3_leaf_log_tail(args, lbp);

File diff suppressed because it is too large.


@ -8,7 +8,41 @@
struct dir_context; struct dir_context;
/*
* In-core version of the leaf and free block headers to abstract the
* differences in the v2 and v3 disk format of the headers.
*/
struct xfs_dir3_icleaf_hdr {
uint32_t forw;
uint32_t back;
uint16_t magic;
uint16_t count;
uint16_t stale;
/*
* Pointer to the on-disk format entries, which are behind the
* variable size (v4 vs v5) header in the on-disk block.
*/
struct xfs_dir2_leaf_entry *ents;
};
struct xfs_dir3_icfree_hdr {
uint32_t magic;
uint32_t firstdb;
uint32_t nvalid;
uint32_t nused;
/*
* Pointer to the on-disk format entries, which are behind the
* variable size (v4 vs v5) header in the on-disk block.
*/
__be16 *bests;
};
/* xfs_dir2.c */ /* xfs_dir2.c */
xfs_dahash_t xfs_ascii_ci_hashname(struct xfs_name *name);
enum xfs_dacmp xfs_ascii_ci_compname(struct xfs_da_args *args,
const unsigned char *name, int len);
extern int xfs_dir2_grow_inode(struct xfs_da_args *args, int space, extern int xfs_dir2_grow_inode(struct xfs_da_args *args, int space,
xfs_dir2_db_t *dbp); xfs_dir2_db_t *dbp);
extern int xfs_dir_cilookup_result(struct xfs_da_args *args, extern int xfs_dir_cilookup_result(struct xfs_da_args *args,
@ -26,6 +60,15 @@ extern int xfs_dir2_leaf_to_block(struct xfs_da_args *args,
struct xfs_buf *lbp, struct xfs_buf *dbp); struct xfs_buf *lbp, struct xfs_buf *dbp);
/* xfs_dir2_data.c */ /* xfs_dir2_data.c */
struct xfs_dir2_data_free *xfs_dir2_data_bestfree_p(struct xfs_mount *mp,
struct xfs_dir2_data_hdr *hdr);
__be16 *xfs_dir2_data_entry_tag_p(struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep);
uint8_t xfs_dir2_data_get_ftype(struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep);
void xfs_dir2_data_put_ftype(struct xfs_mount *mp,
struct xfs_dir2_data_entry *dep, uint8_t ftype);
#ifdef DEBUG #ifdef DEBUG
extern void xfs_dir3_data_check(struct xfs_inode *dp, struct xfs_buf *bp); extern void xfs_dir3_data_check(struct xfs_inode *dp, struct xfs_buf *bp);
#else #else
@ -34,10 +77,10 @@ extern void xfs_dir3_data_check(struct xfs_inode *dp, struct xfs_buf *bp);
extern xfs_failaddr_t __xfs_dir3_data_check(struct xfs_inode *dp, extern xfs_failaddr_t __xfs_dir3_data_check(struct xfs_inode *dp,
struct xfs_buf *bp); struct xfs_buf *bp);
-extern int xfs_dir3_data_read(struct xfs_trans *tp, struct xfs_inode *dp,
-		xfs_dablk_t bno, xfs_daddr_t mapped_bno, struct xfs_buf **bpp);
-extern int xfs_dir3_data_readahead(struct xfs_inode *dp, xfs_dablk_t bno,
-		xfs_daddr_t mapped_bno);
+int xfs_dir3_data_read(struct xfs_trans *tp, struct xfs_inode *dp,
+		xfs_dablk_t bno, unsigned int flags, struct xfs_buf **bpp);
+int xfs_dir3_data_readahead(struct xfs_inode *dp, xfs_dablk_t bno,
+		unsigned int flags);
extern struct xfs_dir2_data_free * extern struct xfs_dir2_data_free *
xfs_dir2_data_freeinsert(struct xfs_dir2_data_hdr *hdr, xfs_dir2_data_freeinsert(struct xfs_dir2_data_hdr *hdr,
@ -47,10 +90,14 @@ extern int xfs_dir3_data_init(struct xfs_da_args *args, xfs_dir2_db_t blkno,
struct xfs_buf **bpp); struct xfs_buf **bpp);
 /* xfs_dir2_leaf.c */
-extern int xfs_dir3_leaf_read(struct xfs_trans *tp, struct xfs_inode *dp,
-		xfs_dablk_t fbno, xfs_daddr_t mappedbno, struct xfs_buf **bpp);
-extern int xfs_dir3_leafn_read(struct xfs_trans *tp, struct xfs_inode *dp,
-		xfs_dablk_t fbno, xfs_daddr_t mappedbno, struct xfs_buf **bpp);
+void xfs_dir2_leaf_hdr_from_disk(struct xfs_mount *mp,
+		struct xfs_dir3_icleaf_hdr *to, struct xfs_dir2_leaf *from);
+void xfs_dir2_leaf_hdr_to_disk(struct xfs_mount *mp, struct xfs_dir2_leaf *to,
+		struct xfs_dir3_icleaf_hdr *from);
+int xfs_dir3_leaf_read(struct xfs_trans *tp, struct xfs_inode *dp,
+		xfs_dablk_t fbno, struct xfs_buf **bpp);
+int xfs_dir3_leafn_read(struct xfs_trans *tp, struct xfs_inode *dp,
+		xfs_dablk_t fbno, struct xfs_buf **bpp);
extern int xfs_dir2_block_to_leaf(struct xfs_da_args *args, extern int xfs_dir2_block_to_leaf(struct xfs_da_args *args,
struct xfs_buf *dbp); struct xfs_buf *dbp);
extern int xfs_dir2_leaf_addname(struct xfs_da_args *args); extern int xfs_dir2_leaf_addname(struct xfs_da_args *args);
@ -62,7 +109,8 @@ extern void xfs_dir3_leaf_compact_x1(struct xfs_dir3_icleaf_hdr *leafhdr,
extern int xfs_dir3_leaf_get_buf(struct xfs_da_args *args, xfs_dir2_db_t bno, extern int xfs_dir3_leaf_get_buf(struct xfs_da_args *args, xfs_dir2_db_t bno,
struct xfs_buf **bpp, uint16_t magic); struct xfs_buf **bpp, uint16_t magic);
extern void xfs_dir3_leaf_log_ents(struct xfs_da_args *args, extern void xfs_dir3_leaf_log_ents(struct xfs_da_args *args,
struct xfs_buf *bp, int first, int last); struct xfs_dir3_icleaf_hdr *hdr, struct xfs_buf *bp, int first,
int last);
extern void xfs_dir3_leaf_log_header(struct xfs_da_args *args, extern void xfs_dir3_leaf_log_header(struct xfs_da_args *args,
struct xfs_buf *bp); struct xfs_buf *bp);
extern int xfs_dir2_leaf_lookup(struct xfs_da_args *args); extern int xfs_dir2_leaf_lookup(struct xfs_da_args *args);
@ -79,10 +127,11 @@ xfs_dir3_leaf_find_entry(struct xfs_dir3_icleaf_hdr *leafhdr,
extern int xfs_dir2_node_to_leaf(struct xfs_da_state *state); extern int xfs_dir2_node_to_leaf(struct xfs_da_state *state);
extern xfs_failaddr_t xfs_dir3_leaf_check_int(struct xfs_mount *mp, extern xfs_failaddr_t xfs_dir3_leaf_check_int(struct xfs_mount *mp,
struct xfs_inode *dp, struct xfs_dir3_icleaf_hdr *hdr, struct xfs_dir3_icleaf_hdr *hdr, struct xfs_dir2_leaf *leaf);
struct xfs_dir2_leaf *leaf);
/* xfs_dir2_node.c */ /* xfs_dir2_node.c */
void xfs_dir2_free_hdr_from_disk(struct xfs_mount *mp,
struct xfs_dir3_icfree_hdr *to, struct xfs_dir2_free *from);
extern int xfs_dir2_leaf_to_node(struct xfs_da_args *args, extern int xfs_dir2_leaf_to_node(struct xfs_da_args *args,
struct xfs_buf *lbp); struct xfs_buf *lbp);
extern xfs_dahash_t xfs_dir2_leaf_lasthash(struct xfs_inode *dp, extern xfs_dahash_t xfs_dir2_leaf_lasthash(struct xfs_inode *dp,
@ -108,6 +157,14 @@ extern int xfs_dir2_free_read(struct xfs_trans *tp, struct xfs_inode *dp,
xfs_dablk_t fbno, struct xfs_buf **bpp); xfs_dablk_t fbno, struct xfs_buf **bpp);
/* xfs_dir2_sf.c */ /* xfs_dir2_sf.c */
xfs_ino_t xfs_dir2_sf_get_ino(struct xfs_mount *mp, struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep);
xfs_ino_t xfs_dir2_sf_get_parent_ino(struct xfs_dir2_sf_hdr *hdr);
void xfs_dir2_sf_put_parent_ino(struct xfs_dir2_sf_hdr *hdr, xfs_ino_t ino);
uint8_t xfs_dir2_sf_get_ftype(struct xfs_mount *mp,
struct xfs_dir2_sf_entry *sfep);
struct xfs_dir2_sf_entry *xfs_dir2_sf_nextentry(struct xfs_mount *mp,
struct xfs_dir2_sf_hdr *hdr, struct xfs_dir2_sf_entry *sfep);
extern int xfs_dir2_block_sfsize(struct xfs_inode *dp, extern int xfs_dir2_block_sfsize(struct xfs_inode *dp,
struct xfs_dir2_data_hdr *block, struct xfs_dir2_sf_hdr *sfhp); struct xfs_dir2_data_hdr *block, struct xfs_dir2_sf_hdr *sfhp);
extern int xfs_dir2_block_to_sf(struct xfs_da_args *args, struct xfs_buf *bp, extern int xfs_dir2_block_to_sf(struct xfs_da_args *args, struct xfs_buf *bp,
@ -123,4 +180,39 @@ extern xfs_failaddr_t xfs_dir2_sf_verify(struct xfs_inode *ip);
extern int xfs_readdir(struct xfs_trans *tp, struct xfs_inode *dp, extern int xfs_readdir(struct xfs_trans *tp, struct xfs_inode *dp,
struct dir_context *ctx, size_t bufsize); struct dir_context *ctx, size_t bufsize);
static inline unsigned int
xfs_dir2_data_entsize(
struct xfs_mount *mp,
unsigned int namelen)
{
unsigned int len;
len = offsetof(struct xfs_dir2_data_entry, name[0]) + namelen +
sizeof(xfs_dir2_data_off_t) /* tag */;
if (xfs_sb_version_hasftype(&mp->m_sb))
len += sizeof(uint8_t);
return round_up(len, XFS_DIR2_DATA_ALIGN);
}
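
A worked example of this rounding, as a standalone program. It assumes the fixed part of struct xfs_dir2_data_entry before the name is 9 bytes (8-byte inumber plus 1-byte namelen) and that XFS_DIR2_DATA_ALIGN is 8; both match the on-disk format but are hard-coded here for illustration only.

#include <stdio.h>
#include <stdint.h>

#define DATA_ALIGN	8	/* assumed value of XFS_DIR2_DATA_ALIGN */
#define FIXED_PART	9	/* assumed: 8-byte inumber + 1-byte namelen */

static unsigned int entsize(unsigned int namelen, int has_ftype)
{
	unsigned int len = FIXED_PART + namelen + sizeof(uint16_t) /* tag */;

	if (has_ftype)
		len += sizeof(uint8_t);				/* file type byte */
	return (len + DATA_ALIGN - 1) & ~(DATA_ALIGN - 1);	/* round_up */
}

int main(void)
{
	/* "foo" with ftype: 9 + 3 + 1 + 2 = 15 -> rounds up to 16 bytes */
	printf("foo: %u\n", entsize(3, 1));
	/* "longername" (10 chars) with ftype: 9 + 10 + 1 + 2 = 22 -> 24 bytes */
	printf("longername: %u\n", entsize(10, 1));
	/* same name without the ftype feature: 21 -> still 24 bytes here */
	printf("longername, no ftype: %u\n", entsize(10, 0));
	return 0;
}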
static inline xfs_dahash_t
xfs_dir2_hashname(
struct xfs_mount *mp,
struct xfs_name *name)
{
if (unlikely(xfs_sb_version_hasasciici(&mp->m_sb)))
return xfs_ascii_ci_hashname(name);
return xfs_da_hashname(name->name, name->len);
}
static inline enum xfs_dacmp
xfs_dir2_compname(
struct xfs_da_args *args,
const unsigned char *name,
int len)
{
if (unlikely(xfs_sb_version_hasasciici(&args->dp->i_mount->m_sb)))
return xfs_ascii_ci_compname(args, name, len);
return xfs_da_compname(args, name, len);
}
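
xfs_dir2_hashname() and xfs_dir2_compname() replace the old per-mount m_dirnameops function-pointer table with a branch on the ASCII-CI feature bit, part of this series' effort to cut indirection penalties in the directory code. A minimal standalone model of the two dispatch styles; the hash functions below are stand-ins, not the real xfs_da_hashname()/xfs_ascii_ci_hashname().

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <ctype.h>
#include <string.h>

static uint32_t hash_exact(const char *name, int len)
{
	uint32_t h = 0;

	for (int i = 0; i < len; i++)
		h = (h << 5) ^ (h >> 27) ^ (unsigned char)name[i];
	return h;
}

static uint32_t hash_ci(const char *name, int len)
{
	uint32_t h = 0;

	for (int i = 0; i < len; i++)
		h = (h << 5) ^ (h >> 27) ^
		    (unsigned char)tolower((unsigned char)name[i]);
	return h;
}

/* Old style: a per-mount ops vector, so every call is an indirect branch. */
struct nameops {
	uint32_t (*hashname)(const char *name, int len);
};

/* New style: one static inline that branches on a feature flag. */
static inline uint32_t do_hashname(bool ci_feature, const char *name, int len)
{
	if (ci_feature)
		return hash_ci(name, len);
	return hash_exact(name, len);
}

int main(void)
{
	const struct nameops ops = { .hashname = hash_exact };
	const char *name = "Makefile";

	printf("via ops vector: %#x\n", ops.hashname(name, strlen(name)));
	printf("via inline:     %#x\n", do_hashname(false, name, strlen(name)));
	return 0;
}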
#endif /* __XFS_DIR2_PRIV_H__ */ #endif /* __XFS_DIR2_PRIV_H__ */


@ -37,6 +37,126 @@ static void xfs_dir2_sf_check(xfs_da_args_t *args);
static void xfs_dir2_sf_toino4(xfs_da_args_t *args); static void xfs_dir2_sf_toino4(xfs_da_args_t *args);
static void xfs_dir2_sf_toino8(xfs_da_args_t *args); static void xfs_dir2_sf_toino8(xfs_da_args_t *args);
static int
xfs_dir2_sf_entsize(
struct xfs_mount *mp,
struct xfs_dir2_sf_hdr *hdr,
int len)
{
int count = len;
count += sizeof(struct xfs_dir2_sf_entry); /* namelen + offset */
count += hdr->i8count ? XFS_INO64_SIZE : XFS_INO32_SIZE; /* ino # */
if (xfs_sb_version_hasftype(&mp->m_sb))
count += sizeof(uint8_t);
return count;
}
struct xfs_dir2_sf_entry *
xfs_dir2_sf_nextentry(
struct xfs_mount *mp,
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
return (void *)sfep + xfs_dir2_sf_entsize(mp, hdr, sfep->namelen);
}
/*
* In short-form directory entries the inode numbers are stored at variable
* offset behind the entry name. If the entry stores a filetype value, then it
* sits between the name and the inode number. The actual inode numbers can
* come in two formats as well, either 4 bytes or 8 bytes wide.
*/
xfs_ino_t
xfs_dir2_sf_get_ino(
struct xfs_mount *mp,
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep)
{
uint8_t *from = sfep->name + sfep->namelen;
if (xfs_sb_version_hasftype(&mp->m_sb))
from++;
if (!hdr->i8count)
return get_unaligned_be32(from);
return get_unaligned_be64(from) & XFS_MAXINUMBER;
}
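
To make the layout comment above concrete, here is one shortform entry sketched byte by byte. It assumes the conventional struct xfs_dir2_sf_entry header of a 1-byte namelen plus a 2-byte offset, which this hunk does not spell out:

	namelen = 3, ftype enabled, hdr->i8count == 0:

	[namelen:1][offset:2][name:3][ftype:1][inumber:4]   -> 11 bytes total

	With hdr->i8count != 0 the trailing inumber widens to 8 bytes (15 bytes
	total), which is exactly the arithmetic xfs_dir2_sf_entsize() performs.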
static void
xfs_dir2_sf_put_ino(
struct xfs_mount *mp,
struct xfs_dir2_sf_hdr *hdr,
struct xfs_dir2_sf_entry *sfep,
xfs_ino_t ino)
{
uint8_t *to = sfep->name + sfep->namelen;
ASSERT(ino <= XFS_MAXINUMBER);
if (xfs_sb_version_hasftype(&mp->m_sb))
to++;
if (hdr->i8count)
put_unaligned_be64(ino, to);
else
put_unaligned_be32(ino, to);
}
xfs_ino_t
xfs_dir2_sf_get_parent_ino(
struct xfs_dir2_sf_hdr *hdr)
{
if (!hdr->i8count)
return get_unaligned_be32(hdr->parent);
return get_unaligned_be64(hdr->parent) & XFS_MAXINUMBER;
}
void
xfs_dir2_sf_put_parent_ino(
struct xfs_dir2_sf_hdr *hdr,
xfs_ino_t ino)
{
ASSERT(ino <= XFS_MAXINUMBER);
if (hdr->i8count)
put_unaligned_be64(ino, hdr->parent);
else
put_unaligned_be32(ino, hdr->parent);
}
/*
* The file type field is stored at the end of the name for filetype enabled
* shortform directories, or not at all otherwise.
*/
uint8_t
xfs_dir2_sf_get_ftype(
struct xfs_mount *mp,
struct xfs_dir2_sf_entry *sfep)
{
if (xfs_sb_version_hasftype(&mp->m_sb)) {
uint8_t ftype = sfep->name[sfep->namelen];
if (ftype < XFS_DIR3_FT_MAX)
return ftype;
}
return XFS_DIR3_FT_UNKNOWN;
}
static void
xfs_dir2_sf_put_ftype(
struct xfs_mount *mp,
struct xfs_dir2_sf_entry *sfep,
uint8_t ftype)
{
ASSERT(ftype < XFS_DIR3_FT_MAX);
if (xfs_sb_version_hasftype(&mp->m_sb))
sfep->name[sfep->namelen] = ftype;
}
/* /*
* Given a block directory (dp/block), calculate its size as a shortform (sf) * Given a block directory (dp/block), calculate its size as a shortform (sf)
* directory and a header for the sf directory, if it will fit in the
@ -125,7 +245,7 @@ xfs_dir2_block_sfsize(
*/ */
sfhp->count = count; sfhp->count = count;
sfhp->i8count = i8count; sfhp->i8count = i8count;
dp->d_ops->sf_put_parent_ino(sfhp, parent); xfs_dir2_sf_put_parent_ino(sfhp, parent);
return size; return size;
} }
@ -135,64 +255,48 @@ xfs_dir2_block_sfsize(
*/ */
int /* error */ int /* error */
xfs_dir2_block_to_sf( xfs_dir2_block_to_sf(
xfs_da_args_t *args, /* operation arguments */ struct xfs_da_args *args, /* operation arguments */
struct xfs_buf *bp, struct xfs_buf *bp,
int size, /* shortform directory size */ int size, /* shortform directory size */
xfs_dir2_sf_hdr_t *sfhp) /* shortform directory hdr */ struct xfs_dir2_sf_hdr *sfhp) /* shortform directory hdr */
{ {
xfs_dir2_data_hdr_t *hdr; /* block header */ struct xfs_inode *dp = args->dp;
xfs_dir2_data_entry_t *dep; /* data entry pointer */ struct xfs_mount *mp = dp->i_mount;
xfs_inode_t *dp; /* incore directory inode */
xfs_dir2_data_unused_t *dup; /* unused data pointer */
char *endptr; /* end of data entries */
int error; /* error return value */ int error; /* error return value */
int logflags; /* inode logging flags */ int logflags; /* inode logging flags */
xfs_mount_t *mp; /* filesystem mount point */ struct xfs_dir2_sf_entry *sfep; /* shortform entry */
char *ptr; /* current data pointer */ struct xfs_dir2_sf_hdr *sfp; /* shortform directory header */
xfs_dir2_sf_entry_t *sfep; /* shortform entry */ unsigned int offset = args->geo->data_entry_offset;
xfs_dir2_sf_hdr_t *sfp; /* shortform directory header */ unsigned int end;
xfs_dir2_sf_hdr_t *dst; /* temporary data buffer */
trace_xfs_dir2_block_to_sf(args); trace_xfs_dir2_block_to_sf(args);
dp = args->dp;
mp = dp->i_mount;
/* /*
* allocate a temporary destination buffer the size of the inode * Allocate a temporary destination buffer the size of the inode to
* to format the data into. Once we have formatted the data, we * format the data into. Once we have formatted the data, we can free
* can free the block and copy the formatted data into the inode literal * the block and copy the formatted data into the inode literal area.
* area.
*/ */
dst = kmem_alloc(mp->m_sb.sb_inodesize, 0); sfp = kmem_alloc(mp->m_sb.sb_inodesize, 0);
hdr = bp->b_addr;
/*
* Copy the header into the newly allocate local space.
*/
sfp = (xfs_dir2_sf_hdr_t *)dst;
memcpy(sfp, sfhp, xfs_dir2_sf_hdr_size(sfhp->i8count)); memcpy(sfp, sfhp, xfs_dir2_sf_hdr_size(sfhp->i8count));
/* /*
* Set up to loop over the block's entries. * Loop over the active and unused entries. Stop when we reach the
* leaf/tail portion of the block.
*/ */
ptr = (char *)dp->d_ops->data_entry_p(hdr); end = xfs_dir3_data_end_offset(args->geo, bp->b_addr);
endptr = xfs_dir3_data_endp(args->geo, hdr);
sfep = xfs_dir2_sf_firstentry(sfp); sfep = xfs_dir2_sf_firstentry(sfp);
/* while (offset < end) {
* Loop over the active and unused entries. struct xfs_dir2_data_unused *dup = bp->b_addr + offset;
* Stop when we reach the leaf/tail portion of the block. struct xfs_dir2_data_entry *dep = bp->b_addr + offset;
*/
while (ptr < endptr) {
/* /*
* If it's unused, just skip over it. * If it's unused, just skip over it.
*/ */
dup = (xfs_dir2_data_unused_t *)ptr;
if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
ptr += be16_to_cpu(dup->length); offset += be16_to_cpu(dup->length);
continue; continue;
} }
dep = (xfs_dir2_data_entry_t *)ptr;
/* /*
* Skip . * Skip .
*/ */
@ -204,24 +308,22 @@ xfs_dir2_block_to_sf(
else if (dep->namelen == 2 && else if (dep->namelen == 2 &&
dep->name[0] == '.' && dep->name[1] == '.') dep->name[0] == '.' && dep->name[1] == '.')
ASSERT(be64_to_cpu(dep->inumber) == ASSERT(be64_to_cpu(dep->inumber) ==
dp->d_ops->sf_get_parent_ino(sfp)); xfs_dir2_sf_get_parent_ino(sfp));
/* /*
* Normal entry, copy it into shortform. * Normal entry, copy it into shortform.
*/ */
else { else {
sfep->namelen = dep->namelen; sfep->namelen = dep->namelen;
xfs_dir2_sf_put_offset(sfep, xfs_dir2_sf_put_offset(sfep, offset);
(xfs_dir2_data_aoff_t)
((char *)dep - (char *)hdr));
memcpy(sfep->name, dep->name, dep->namelen); memcpy(sfep->name, dep->name, dep->namelen);
dp->d_ops->sf_put_ino(sfp, sfep, xfs_dir2_sf_put_ino(mp, sfp, sfep,
be64_to_cpu(dep->inumber)); be64_to_cpu(dep->inumber));
dp->d_ops->sf_put_ftype(sfep, xfs_dir2_sf_put_ftype(mp, sfep,
dp->d_ops->data_get_ftype(dep)); xfs_dir2_data_get_ftype(mp, dep));
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
} }
ptr += dp->d_ops->data_entsize(dep->namelen); offset += xfs_dir2_data_entsize(mp, dep->namelen);
} }
ASSERT((char *)sfep - (char *)sfp == size); ASSERT((char *)sfep - (char *)sfp == size);
@ -240,7 +342,7 @@ xfs_dir2_block_to_sf(
* Convert the inode to local format and copy the data in. * Convert the inode to local format and copy the data in.
*/ */
ASSERT(dp->i_df.if_bytes == 0); ASSERT(dp->i_df.if_bytes == 0);
xfs_init_local_fork(dp, XFS_DATA_FORK, dst, size); xfs_init_local_fork(dp, XFS_DATA_FORK, sfp, size);
dp->i_d.di_format = XFS_DINODE_FMT_LOCAL; dp->i_d.di_format = XFS_DINODE_FMT_LOCAL;
dp->i_d.di_size = size; dp->i_d.di_size = size;
@ -248,7 +350,7 @@ xfs_dir2_block_to_sf(
xfs_dir2_sf_check(args); xfs_dir2_sf_check(args);
out: out:
xfs_trans_log_inode(args->trans, dp, logflags); xfs_trans_log_inode(args->trans, dp, logflags);
kmem_free(dst); kmem_free(sfp);
return error; return error;
} }
@ -277,13 +379,7 @@ xfs_dir2_sf_addname(
ASSERT(xfs_dir2_sf_lookup(args) == -ENOENT); ASSERT(xfs_dir2_sf_lookup(args) == -ENOENT);
dp = args->dp; dp = args->dp;
ASSERT(dp->i_df.if_flags & XFS_IFINLINE); ASSERT(dp->i_df.if_flags & XFS_IFINLINE);
/* ASSERT(dp->i_d.di_size >= offsetof(struct xfs_dir2_sf_hdr, parent));
* Make sure the shortform value has some of its header.
*/
if (dp->i_d.di_size < offsetof(xfs_dir2_sf_hdr_t, parent)) {
ASSERT(XFS_FORCED_SHUTDOWN(dp->i_mount));
return -EIO;
}
ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_bytes == dp->i_d.di_size);
ASSERT(dp->i_df.if_u1.if_data != NULL); ASSERT(dp->i_df.if_u1.if_data != NULL);
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
@ -291,7 +387,7 @@ xfs_dir2_sf_addname(
/* /*
* Compute entry (and change in) size. * Compute entry (and change in) size.
*/ */
incr_isize = dp->d_ops->sf_entsize(sfp, args->namelen); incr_isize = xfs_dir2_sf_entsize(dp->i_mount, sfp, args->namelen);
objchange = 0; objchange = 0;
/* /*
@ -364,18 +460,17 @@ xfs_dir2_sf_addname_easy(
xfs_dir2_data_aoff_t offset, /* offset to use for new ent */ xfs_dir2_data_aoff_t offset, /* offset to use for new ent */
int new_isize) /* new directory size */ int new_isize) /* new directory size */
{ {
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int byteoff; /* byte offset in sf dir */ int byteoff; /* byte offset in sf dir */
xfs_inode_t *dp; /* incore directory inode */
xfs_dir2_sf_hdr_t *sfp; /* shortform structure */ xfs_dir2_sf_hdr_t *sfp; /* shortform structure */
dp = args->dp;
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
byteoff = (int)((char *)sfep - (char *)sfp); byteoff = (int)((char *)sfep - (char *)sfp);
/* /*
* Grow the in-inode space. * Grow the in-inode space.
*/ */
xfs_idata_realloc(dp, dp->d_ops->sf_entsize(sfp, args->namelen), xfs_idata_realloc(dp, xfs_dir2_sf_entsize(mp, sfp, args->namelen),
XFS_DATA_FORK); XFS_DATA_FORK);
/* /*
* Need to set up again due to realloc of the inode data. * Need to set up again due to realloc of the inode data.
@ -388,8 +483,8 @@ xfs_dir2_sf_addname_easy(
sfep->namelen = args->namelen; sfep->namelen = args->namelen;
xfs_dir2_sf_put_offset(sfep, offset); xfs_dir2_sf_put_offset(sfep, offset);
memcpy(sfep->name, args->name, sfep->namelen); memcpy(sfep->name, args->name, sfep->namelen);
dp->d_ops->sf_put_ino(sfp, sfep, args->inumber); xfs_dir2_sf_put_ino(mp, sfp, sfep, args->inumber);
dp->d_ops->sf_put_ftype(sfep, args->filetype); xfs_dir2_sf_put_ftype(mp, sfep, args->filetype);
/* /*
* Update the header and inode. * Update the header and inode.
@ -416,9 +511,10 @@ xfs_dir2_sf_addname_hard(
int objchange, /* changing inode number size */ int objchange, /* changing inode number size */
int new_isize) /* new directory size */ int new_isize) /* new directory size */
{ {
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int add_datasize; /* data size need for new ent */ int add_datasize; /* data size need for new ent */
char *buf; /* buffer for old */ char *buf; /* buffer for old */
xfs_inode_t *dp; /* incore directory inode */
int eof; /* reached end of old dir */ int eof; /* reached end of old dir */
int nbytes; /* temp for byte copies */ int nbytes; /* temp for byte copies */
xfs_dir2_data_aoff_t new_offset; /* next offset value */ xfs_dir2_data_aoff_t new_offset; /* next offset value */
@ -432,8 +528,6 @@ xfs_dir2_sf_addname_hard(
/* /*
* Copy the old directory to the stack buffer. * Copy the old directory to the stack buffer.
*/ */
dp = args->dp;
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
old_isize = (int)dp->i_d.di_size; old_isize = (int)dp->i_d.di_size;
buf = kmem_alloc(old_isize, 0); buf = kmem_alloc(old_isize, 0);
@ -444,13 +538,13 @@ xfs_dir2_sf_addname_hard(
* to insert the new entry. * to insert the new entry.
* If it's going to end up at the end then oldsfep will point there. * If it's going to end up at the end then oldsfep will point there.
*/ */
for (offset = dp->d_ops->data_first_offset, for (offset = args->geo->data_first_offset,
oldsfep = xfs_dir2_sf_firstentry(oldsfp), oldsfep = xfs_dir2_sf_firstentry(oldsfp),
add_datasize = dp->d_ops->data_entsize(args->namelen), add_datasize = xfs_dir2_data_entsize(mp, args->namelen),
eof = (char *)oldsfep == &buf[old_isize]; eof = (char *)oldsfep == &buf[old_isize];
!eof; !eof;
offset = new_offset + dp->d_ops->data_entsize(oldsfep->namelen), offset = new_offset + xfs_dir2_data_entsize(mp, oldsfep->namelen),
oldsfep = dp->d_ops->sf_nextentry(oldsfp, oldsfep), oldsfep = xfs_dir2_sf_nextentry(mp, oldsfp, oldsfep),
eof = (char *)oldsfep == &buf[old_isize]) { eof = (char *)oldsfep == &buf[old_isize]) {
new_offset = xfs_dir2_sf_get_offset(oldsfep); new_offset = xfs_dir2_sf_get_offset(oldsfep);
if (offset + add_datasize <= new_offset) if (offset + add_datasize <= new_offset)
@ -479,8 +573,8 @@ xfs_dir2_sf_addname_hard(
sfep->namelen = args->namelen; sfep->namelen = args->namelen;
xfs_dir2_sf_put_offset(sfep, offset); xfs_dir2_sf_put_offset(sfep, offset);
memcpy(sfep->name, args->name, sfep->namelen); memcpy(sfep->name, args->name, sfep->namelen);
dp->d_ops->sf_put_ino(sfp, sfep, args->inumber); xfs_dir2_sf_put_ino(mp, sfp, sfep, args->inumber);
dp->d_ops->sf_put_ftype(sfep, args->filetype); xfs_dir2_sf_put_ftype(mp, sfep, args->filetype);
sfp->count++; sfp->count++;
if (args->inumber > XFS_DIR2_MAX_SHORT_INUM && !objchange) if (args->inumber > XFS_DIR2_MAX_SHORT_INUM && !objchange)
sfp->i8count++; sfp->i8count++;
@ -488,7 +582,7 @@ xfs_dir2_sf_addname_hard(
* If there's more left to copy, do that. * If there's more left to copy, do that.
*/ */
if (!eof) { if (!eof) {
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
memcpy(sfep, oldsfep, old_isize - nbytes); memcpy(sfep, oldsfep, old_isize - nbytes);
} }
kmem_free(buf); kmem_free(buf);
@ -510,7 +604,8 @@ xfs_dir2_sf_addname_pick(
xfs_dir2_sf_entry_t **sfepp, /* out(1): new entry ptr */ xfs_dir2_sf_entry_t **sfepp, /* out(1): new entry ptr */
xfs_dir2_data_aoff_t *offsetp) /* out(1): new offset */ xfs_dir2_data_aoff_t *offsetp) /* out(1): new offset */
{ {
xfs_inode_t *dp; /* incore directory inode */ struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int holefit; /* found hole it will fit in */ int holefit; /* found hole it will fit in */
int i; /* entry number */ int i; /* entry number */
xfs_dir2_data_aoff_t offset; /* data block offset */ xfs_dir2_data_aoff_t offset; /* data block offset */
@ -519,11 +614,9 @@ xfs_dir2_sf_addname_pick(
int size; /* entry's data size */ int size; /* entry's data size */
int used; /* data bytes used */ int used; /* data bytes used */
dp = args->dp;
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
size = dp->d_ops->data_entsize(args->namelen); size = xfs_dir2_data_entsize(mp, args->namelen);
offset = dp->d_ops->data_first_offset; offset = args->geo->data_first_offset;
sfep = xfs_dir2_sf_firstentry(sfp); sfep = xfs_dir2_sf_firstentry(sfp);
holefit = 0; holefit = 0;
/* /*
@ -535,8 +628,8 @@ xfs_dir2_sf_addname_pick(
if (!holefit) if (!holefit)
holefit = offset + size <= xfs_dir2_sf_get_offset(sfep); holefit = offset + size <= xfs_dir2_sf_get_offset(sfep);
offset = xfs_dir2_sf_get_offset(sfep) + offset = xfs_dir2_sf_get_offset(sfep) +
dp->d_ops->data_entsize(sfep->namelen); xfs_dir2_data_entsize(mp, sfep->namelen);
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
} }
/* /*
* Calculate data bytes used excluding the new entry, if this * Calculate data bytes used excluding the new entry, if this
@ -578,7 +671,8 @@ static void
xfs_dir2_sf_check( xfs_dir2_sf_check(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
xfs_inode_t *dp; /* incore directory inode */ struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int i; /* entry number */ int i; /* entry number */
int i8count; /* number of big inode#s */ int i8count; /* number of big inode#s */
xfs_ino_t ino; /* entry inode number */ xfs_ino_t ino; /* entry inode number */
@ -586,23 +680,21 @@ xfs_dir2_sf_check(
xfs_dir2_sf_entry_t *sfep; /* shortform dir entry */ xfs_dir2_sf_entry_t *sfep; /* shortform dir entry */
xfs_dir2_sf_hdr_t *sfp; /* shortform structure */ xfs_dir2_sf_hdr_t *sfp; /* shortform structure */
dp = args->dp;
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
offset = dp->d_ops->data_first_offset; offset = args->geo->data_first_offset;
ino = dp->d_ops->sf_get_parent_ino(sfp); ino = xfs_dir2_sf_get_parent_ino(sfp);
i8count = ino > XFS_DIR2_MAX_SHORT_INUM; i8count = ino > XFS_DIR2_MAX_SHORT_INUM;
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp);
i < sfp->count; i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep)) { i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep)) {
ASSERT(xfs_dir2_sf_get_offset(sfep) >= offset); ASSERT(xfs_dir2_sf_get_offset(sfep) >= offset);
ino = dp->d_ops->sf_get_ino(sfp, sfep); ino = xfs_dir2_sf_get_ino(mp, sfp, sfep);
i8count += ino > XFS_DIR2_MAX_SHORT_INUM; i8count += ino > XFS_DIR2_MAX_SHORT_INUM;
offset = offset =
xfs_dir2_sf_get_offset(sfep) + xfs_dir2_sf_get_offset(sfep) +
dp->d_ops->data_entsize(sfep->namelen); xfs_dir2_data_entsize(mp, sfep->namelen);
ASSERT(dp->d_ops->sf_get_ftype(sfep) < XFS_DIR3_FT_MAX); ASSERT(xfs_dir2_sf_get_ftype(mp, sfep) < XFS_DIR3_FT_MAX);
} }
ASSERT(i8count == sfp->i8count); ASSERT(i8count == sfp->i8count);
ASSERT((char *)sfep - (char *)sfp == dp->i_d.di_size); ASSERT((char *)sfep - (char *)sfp == dp->i_d.di_size);
@ -622,22 +714,16 @@ xfs_dir2_sf_verify(
struct xfs_dir2_sf_entry *sfep; struct xfs_dir2_sf_entry *sfep;
struct xfs_dir2_sf_entry *next_sfep; struct xfs_dir2_sf_entry *next_sfep;
char *endp; char *endp;
const struct xfs_dir_ops *dops;
struct xfs_ifork *ifp; struct xfs_ifork *ifp;
xfs_ino_t ino; xfs_ino_t ino;
int i; int i;
int i8count; int i8count;
int offset; int offset;
int size; int64_t size;
int error; int error;
uint8_t filetype; uint8_t filetype;
ASSERT(ip->i_d.di_format == XFS_DINODE_FMT_LOCAL); ASSERT(ip->i_d.di_format == XFS_DINODE_FMT_LOCAL);
/*
* xfs_iread calls us before xfs_setup_inode sets up ip->d_ops,
* so we can only trust the mountpoint to have the right pointer.
*/
dops = xfs_dir_get_ops(mp, NULL);
ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK); ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK);
sfp = (struct xfs_dir2_sf_hdr *)ifp->if_u1.if_data; sfp = (struct xfs_dir2_sf_hdr *)ifp->if_u1.if_data;
@ -653,12 +739,12 @@ xfs_dir2_sf_verify(
endp = (char *)sfp + size; endp = (char *)sfp + size;
/* Check .. entry */ /* Check .. entry */
ino = dops->sf_get_parent_ino(sfp); ino = xfs_dir2_sf_get_parent_ino(sfp);
i8count = ino > XFS_DIR2_MAX_SHORT_INUM; i8count = ino > XFS_DIR2_MAX_SHORT_INUM;
error = xfs_dir_ino_validate(mp, ino); error = xfs_dir_ino_validate(mp, ino);
if (error) if (error)
return __this_address; return __this_address;
offset = dops->data_first_offset; offset = mp->m_dir_geo->data_first_offset;
/* Check all reported entries */ /* Check all reported entries */
sfep = xfs_dir2_sf_firstentry(sfp); sfep = xfs_dir2_sf_firstentry(sfp);
@ -680,7 +766,7 @@ xfs_dir2_sf_verify(
* within the data buffer. The next entry starts after the * within the data buffer. The next entry starts after the
* name component, so nextentry is an acceptable test. * name component, so nextentry is an acceptable test.
*/ */
next_sfep = dops->sf_nextentry(sfp, sfep); next_sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
if (endp < (char *)next_sfep) if (endp < (char *)next_sfep)
return __this_address; return __this_address;
@ -689,19 +775,19 @@ xfs_dir2_sf_verify(
return __this_address; return __this_address;
/* Check the inode number. */ /* Check the inode number. */
ino = dops->sf_get_ino(sfp, sfep); ino = xfs_dir2_sf_get_ino(mp, sfp, sfep);
i8count += ino > XFS_DIR2_MAX_SHORT_INUM; i8count += ino > XFS_DIR2_MAX_SHORT_INUM;
error = xfs_dir_ino_validate(mp, ino); error = xfs_dir_ino_validate(mp, ino);
if (error) if (error)
return __this_address; return __this_address;
/* Check the file type. */ /* Check the file type. */
filetype = dops->sf_get_ftype(sfep); filetype = xfs_dir2_sf_get_ftype(mp, sfep);
if (filetype >= XFS_DIR3_FT_MAX) if (filetype >= XFS_DIR3_FT_MAX)
return __this_address; return __this_address;
offset = xfs_dir2_sf_get_offset(sfep) + offset = xfs_dir2_sf_get_offset(sfep) +
dops->data_entsize(sfep->namelen); xfs_dir2_data_entsize(mp, sfep->namelen);
sfep = next_sfep; sfep = next_sfep;
} }
@ -763,7 +849,7 @@ xfs_dir2_sf_create(
/* /*
* Now can put in the inode number, since i8count is set. * Now can put in the inode number, since i8count is set.
*/ */
dp->d_ops->sf_put_parent_ino(sfp, pino); xfs_dir2_sf_put_parent_ino(sfp, pino);
sfp->count = 0; sfp->count = 0;
dp->i_d.di_size = size; dp->i_d.di_size = size;
xfs_dir2_sf_check(args); xfs_dir2_sf_check(args);
@ -779,7 +865,8 @@ int /* error */
xfs_dir2_sf_lookup( xfs_dir2_sf_lookup(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
xfs_inode_t *dp; /* incore directory inode */ struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int i; /* entry index */ int i; /* entry index */
int error; int error;
xfs_dir2_sf_entry_t *sfep; /* shortform directory entry */ xfs_dir2_sf_entry_t *sfep; /* shortform directory entry */
@ -790,16 +877,9 @@ xfs_dir2_sf_lookup(
trace_xfs_dir2_sf_lookup(args); trace_xfs_dir2_sf_lookup(args);
xfs_dir2_sf_check(args); xfs_dir2_sf_check(args);
dp = args->dp;
ASSERT(dp->i_df.if_flags & XFS_IFINLINE); ASSERT(dp->i_df.if_flags & XFS_IFINLINE);
/* ASSERT(dp->i_d.di_size >= offsetof(struct xfs_dir2_sf_hdr, parent));
* Bail out if the directory is way too short.
*/
if (dp->i_d.di_size < offsetof(xfs_dir2_sf_hdr_t, parent)) {
ASSERT(XFS_FORCED_SHUTDOWN(dp->i_mount));
return -EIO;
}
ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_bytes == dp->i_d.di_size);
ASSERT(dp->i_df.if_u1.if_data != NULL); ASSERT(dp->i_df.if_u1.if_data != NULL);
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
@ -818,7 +898,7 @@ xfs_dir2_sf_lookup(
*/ */
if (args->namelen == 2 && if (args->namelen == 2 &&
args->name[0] == '.' && args->name[1] == '.') { args->name[0] == '.' && args->name[1] == '.') {
args->inumber = dp->d_ops->sf_get_parent_ino(sfp); args->inumber = xfs_dir2_sf_get_parent_ino(sfp);
args->cmpresult = XFS_CMP_EXACT; args->cmpresult = XFS_CMP_EXACT;
args->filetype = XFS_DIR3_FT_DIR; args->filetype = XFS_DIR3_FT_DIR;
return -EEXIST; return -EEXIST;
@ -828,18 +908,17 @@ xfs_dir2_sf_lookup(
*/ */
ci_sfep = NULL; ci_sfep = NULL;
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count; for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep)) { i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep)) {
/* /*
* Compare name and if it's an exact match, return the inode * Compare name and if it's an exact match, return the inode
* number. If it's the first case-insensitive match, store the * number. If it's the first case-insensitive match, store the
* inode number and continue looking for an exact match. * inode number and continue looking for an exact match.
*/ */
cmp = dp->i_mount->m_dirnameops->compname(args, sfep->name, cmp = xfs_dir2_compname(args, sfep->name, sfep->namelen);
sfep->namelen);
if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) { if (cmp != XFS_CMP_DIFFERENT && cmp != args->cmpresult) {
args->cmpresult = cmp; args->cmpresult = cmp;
args->inumber = dp->d_ops->sf_get_ino(sfp, sfep); args->inumber = xfs_dir2_sf_get_ino(mp, sfp, sfep);
args->filetype = dp->d_ops->sf_get_ftype(sfep); args->filetype = xfs_dir2_sf_get_ftype(mp, sfep);
if (cmp == XFS_CMP_EXACT) if (cmp == XFS_CMP_EXACT)
return -EEXIST; return -EEXIST;
ci_sfep = sfep; ci_sfep = sfep;
@ -864,8 +943,9 @@ int /* error */
xfs_dir2_sf_removename( xfs_dir2_sf_removename(
xfs_da_args_t *args) xfs_da_args_t *args)
{ {
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int byteoff; /* offset of removed entry */ int byteoff; /* offset of removed entry */
xfs_inode_t *dp; /* incore directory inode */
int entsize; /* this entry's size */ int entsize; /* this entry's size */
int i; /* shortform entry index */ int i; /* shortform entry index */
int newsize; /* new inode size */ int newsize; /* new inode size */
@ -875,17 +955,9 @@ xfs_dir2_sf_removename(
trace_xfs_dir2_sf_removename(args); trace_xfs_dir2_sf_removename(args);
dp = args->dp;
ASSERT(dp->i_df.if_flags & XFS_IFINLINE); ASSERT(dp->i_df.if_flags & XFS_IFINLINE);
oldsize = (int)dp->i_d.di_size; oldsize = (int)dp->i_d.di_size;
/* ASSERT(oldsize >= offsetof(struct xfs_dir2_sf_hdr, parent));
* Bail out if the directory is way too short.
*/
if (oldsize < offsetof(xfs_dir2_sf_hdr_t, parent)) {
ASSERT(XFS_FORCED_SHUTDOWN(dp->i_mount));
return -EIO;
}
ASSERT(dp->i_df.if_bytes == oldsize); ASSERT(dp->i_df.if_bytes == oldsize);
ASSERT(dp->i_df.if_u1.if_data != NULL); ASSERT(dp->i_df.if_u1.if_data != NULL);
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
@ -895,10 +967,10 @@ xfs_dir2_sf_removename(
* Find the one we're deleting. * Find the one we're deleting.
*/ */
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count; for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep)) { i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep)) {
if (xfs_da_compname(args, sfep->name, sfep->namelen) == if (xfs_da_compname(args, sfep->name, sfep->namelen) ==
XFS_CMP_EXACT) { XFS_CMP_EXACT) {
ASSERT(dp->d_ops->sf_get_ino(sfp, sfep) == ASSERT(xfs_dir2_sf_get_ino(mp, sfp, sfep) ==
args->inumber); args->inumber);
break; break;
} }
@ -912,7 +984,7 @@ xfs_dir2_sf_removename(
* Calculate sizes. * Calculate sizes.
*/ */
byteoff = (int)((char *)sfep - (char *)sfp); byteoff = (int)((char *)sfep - (char *)sfp);
entsize = dp->d_ops->sf_entsize(sfp, args->namelen); entsize = xfs_dir2_sf_entsize(mp, sfp, args->namelen);
newsize = oldsize - entsize; newsize = oldsize - entsize;
/* /*
* Copy the part if any after the removed entry, sliding it down. * Copy the part if any after the removed entry, sliding it down.
@ -944,6 +1016,27 @@ xfs_dir2_sf_removename(
return 0; return 0;
} }
/*
* Check whether the sf dir replace operation needs more blocks.
*/
bool
xfs_dir2_sf_replace_needblock(
struct xfs_inode *dp,
xfs_ino_t inum)
{
int newsize;
struct xfs_dir2_sf_hdr *sfp;
if (dp->i_d.di_format != XFS_DINODE_FMT_LOCAL)
return false;
sfp = (struct xfs_dir2_sf_hdr *)dp->i_df.if_u1.if_data;
newsize = dp->i_df.if_bytes + (sfp->count + 1) * XFS_INO64_DIFF;
return inum > XFS_DIR2_MAX_SHORT_INUM &&
sfp->i8count == 0 && newsize > XFS_IFORK_DSIZE(dp);
}
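
A short worked example of the growth check above, assuming XFS_INO64_DIFF is the 4-byte difference between the 8-byte and 4-byte shortform inode number encodings (the constant is defined outside this hunk):

	if_bytes = 80, sfp->count = 3:  newsize = 80 + (3 + 1) * 4 = 96

	Replacing an entry with an inum above XFS_DIR2_MAX_SHORT_INUM therefore
	forces the conversion to block form only if 96 > XFS_IFORK_DSIZE(dp);
	otherwise the directory stays shortform and every inode number is simply
	widened in place (the sf_toino8 path).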
/* /*
* Replace the inode number of an entry in a shortform directory. * Replace the inode number of an entry in a shortform directory.
*/ */
@ -951,7 +1044,8 @@ int /* error */
xfs_dir2_sf_replace( xfs_dir2_sf_replace(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
xfs_inode_t *dp; /* incore directory inode */ struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
int i; /* entry index */ int i; /* entry index */
xfs_ino_t ino=0; /* entry old inode number */ xfs_ino_t ino=0; /* entry old inode number */
int i8elevated; /* sf_toino8 set i8count=1 */ int i8elevated; /* sf_toino8 set i8count=1 */
@ -960,16 +1054,8 @@ xfs_dir2_sf_replace(
trace_xfs_dir2_sf_replace(args); trace_xfs_dir2_sf_replace(args);
dp = args->dp;
ASSERT(dp->i_df.if_flags & XFS_IFINLINE); ASSERT(dp->i_df.if_flags & XFS_IFINLINE);
/* ASSERT(dp->i_d.di_size >= offsetof(struct xfs_dir2_sf_hdr, parent));
* Bail out if the shortform directory is way too small.
*/
if (dp->i_d.di_size < offsetof(xfs_dir2_sf_hdr_t, parent)) {
ASSERT(XFS_FORCED_SHUTDOWN(dp->i_mount));
return -EIO;
}
ASSERT(dp->i_df.if_bytes == dp->i_d.di_size); ASSERT(dp->i_df.if_bytes == dp->i_d.di_size);
ASSERT(dp->i_df.if_u1.if_data != NULL); ASSERT(dp->i_df.if_u1.if_data != NULL);
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
@ -980,17 +1066,14 @@ xfs_dir2_sf_replace(
*/ */
if (args->inumber > XFS_DIR2_MAX_SHORT_INUM && sfp->i8count == 0) { if (args->inumber > XFS_DIR2_MAX_SHORT_INUM && sfp->i8count == 0) {
int error; /* error return value */ int error; /* error return value */
int newsize; /* new inode size */
newsize = dp->i_df.if_bytes + (sfp->count + 1) * XFS_INO64_DIFF;
/* /*
* Won't fit as shortform, convert to block then do replace. * Won't fit as shortform, convert to block then do replace.
*/ */
if (newsize > XFS_IFORK_DSIZE(dp)) { if (xfs_dir2_sf_replace_needblock(dp, args->inumber)) {
error = xfs_dir2_sf_to_block(args); error = xfs_dir2_sf_to_block(args);
if (error) { if (error)
return error; return error;
}
return xfs_dir2_block_replace(args); return xfs_dir2_block_replace(args);
} }
/* /*
@ -1008,22 +1091,23 @@ xfs_dir2_sf_replace(
*/ */
if (args->namelen == 2 && if (args->namelen == 2 &&
args->name[0] == '.' && args->name[1] == '.') { args->name[0] == '.' && args->name[1] == '.') {
ino = dp->d_ops->sf_get_parent_ino(sfp); ino = xfs_dir2_sf_get_parent_ino(sfp);
ASSERT(args->inumber != ino); ASSERT(args->inumber != ino);
dp->d_ops->sf_put_parent_ino(sfp, args->inumber); xfs_dir2_sf_put_parent_ino(sfp, args->inumber);
} }
/* /*
* Normal entry, look for the name. * Normal entry, look for the name.
*/ */
else { else {
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count; for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp); i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep)) { i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep)) {
if (xfs_da_compname(args, sfep->name, sfep->namelen) == if (xfs_da_compname(args, sfep->name, sfep->namelen) ==
XFS_CMP_EXACT) { XFS_CMP_EXACT) {
ino = dp->d_ops->sf_get_ino(sfp, sfep); ino = xfs_dir2_sf_get_ino(mp, sfp, sfep);
ASSERT(args->inumber != ino); ASSERT(args->inumber != ino);
dp->d_ops->sf_put_ino(sfp, sfep, args->inumber); xfs_dir2_sf_put_ino(mp, sfp, sfep,
dp->d_ops->sf_put_ftype(sfep, args->filetype); args->inumber);
xfs_dir2_sf_put_ftype(mp, sfep, args->filetype);
break; break;
} }
} }
@ -1076,8 +1160,9 @@ static void
xfs_dir2_sf_toino4( xfs_dir2_sf_toino4(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
char *buf; /* old dir's buffer */ char *buf; /* old dir's buffer */
xfs_inode_t *dp; /* incore directory inode */
int i; /* entry index */ int i; /* entry index */
int newsize; /* new inode size */ int newsize; /* new inode size */
xfs_dir2_sf_entry_t *oldsfep; /* old sf entry */ xfs_dir2_sf_entry_t *oldsfep; /* old sf entry */
@ -1088,8 +1173,6 @@ xfs_dir2_sf_toino4(
trace_xfs_dir2_sf_toino4(args); trace_xfs_dir2_sf_toino4(args);
dp = args->dp;
/* /*
* Copy the old directory to the buffer. * Copy the old directory to the buffer.
* Then nuke it from the inode, and add the new buffer to the inode. * Then nuke it from the inode, and add the new buffer to the inode.
@ -1116,21 +1199,22 @@ xfs_dir2_sf_toino4(
*/ */
sfp->count = oldsfp->count; sfp->count = oldsfp->count;
sfp->i8count = 0; sfp->i8count = 0;
dp->d_ops->sf_put_parent_ino(sfp, dp->d_ops->sf_get_parent_ino(oldsfp)); xfs_dir2_sf_put_parent_ino(sfp, xfs_dir2_sf_get_parent_ino(oldsfp));
/* /*
* Copy the entries field by field. * Copy the entries field by field.
*/ */
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp), for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp),
oldsfep = xfs_dir2_sf_firstentry(oldsfp); oldsfep = xfs_dir2_sf_firstentry(oldsfp);
i < sfp->count; i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep), i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep),
oldsfep = dp->d_ops->sf_nextentry(oldsfp, oldsfep)) { oldsfep = xfs_dir2_sf_nextentry(mp, oldsfp, oldsfep)) {
sfep->namelen = oldsfep->namelen; sfep->namelen = oldsfep->namelen;
memcpy(sfep->offset, oldsfep->offset, sizeof(sfep->offset)); memcpy(sfep->offset, oldsfep->offset, sizeof(sfep->offset));
memcpy(sfep->name, oldsfep->name, sfep->namelen); memcpy(sfep->name, oldsfep->name, sfep->namelen);
dp->d_ops->sf_put_ino(sfp, sfep, xfs_dir2_sf_put_ino(mp, sfp, sfep,
dp->d_ops->sf_get_ino(oldsfp, oldsfep)); xfs_dir2_sf_get_ino(mp, oldsfp, oldsfep));
dp->d_ops->sf_put_ftype(sfep, dp->d_ops->sf_get_ftype(oldsfep)); xfs_dir2_sf_put_ftype(mp, sfep,
xfs_dir2_sf_get_ftype(mp, oldsfep));
} }
/* /*
* Clean up the inode. * Clean up the inode.
@ -1149,8 +1233,9 @@ static void
xfs_dir2_sf_toino8( xfs_dir2_sf_toino8(
xfs_da_args_t *args) /* operation arguments */ xfs_da_args_t *args) /* operation arguments */
{ {
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
char *buf; /* old dir's buffer */ char *buf; /* old dir's buffer */
xfs_inode_t *dp; /* incore directory inode */
int i; /* entry index */ int i; /* entry index */
int newsize; /* new inode size */ int newsize; /* new inode size */
xfs_dir2_sf_entry_t *oldsfep; /* old sf entry */ xfs_dir2_sf_entry_t *oldsfep; /* old sf entry */
@ -1161,8 +1246,6 @@ xfs_dir2_sf_toino8(
trace_xfs_dir2_sf_toino8(args); trace_xfs_dir2_sf_toino8(args);
dp = args->dp;
/* /*
* Copy the old directory to the buffer. * Copy the old directory to the buffer.
* Then nuke it from the inode, and add the new buffer to the inode. * Then nuke it from the inode, and add the new buffer to the inode.
@ -1189,21 +1272,22 @@ xfs_dir2_sf_toino8(
*/ */
sfp->count = oldsfp->count; sfp->count = oldsfp->count;
sfp->i8count = 1; sfp->i8count = 1;
dp->d_ops->sf_put_parent_ino(sfp, dp->d_ops->sf_get_parent_ino(oldsfp)); xfs_dir2_sf_put_parent_ino(sfp, xfs_dir2_sf_get_parent_ino(oldsfp));
/* /*
* Copy the entries field by field. * Copy the entries field by field.
*/ */
for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp), for (i = 0, sfep = xfs_dir2_sf_firstentry(sfp),
oldsfep = xfs_dir2_sf_firstentry(oldsfp); oldsfep = xfs_dir2_sf_firstentry(oldsfp);
i < sfp->count; i < sfp->count;
i++, sfep = dp->d_ops->sf_nextentry(sfp, sfep), i++, sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep),
oldsfep = dp->d_ops->sf_nextentry(oldsfp, oldsfep)) { oldsfep = xfs_dir2_sf_nextentry(mp, oldsfp, oldsfep)) {
sfep->namelen = oldsfep->namelen; sfep->namelen = oldsfep->namelen;
memcpy(sfep->offset, oldsfep->offset, sizeof(sfep->offset)); memcpy(sfep->offset, oldsfep->offset, sizeof(sfep->offset));
memcpy(sfep->name, oldsfep->name, sfep->namelen); memcpy(sfep->name, oldsfep->name, sfep->namelen);
dp->d_ops->sf_put_ino(sfp, sfep, xfs_dir2_sf_put_ino(mp, sfp, sfep,
dp->d_ops->sf_get_ino(oldsfp, oldsfep)); xfs_dir2_sf_get_ino(mp, oldsfp, oldsfep));
dp->d_ops->sf_put_ftype(sfep, dp->d_ops->sf_get_ftype(oldsfep)); xfs_dir2_sf_put_ftype(mp, sfep,
xfs_dir2_sf_get_ftype(mp, oldsfep));
} }
/* /*
* Clean up the inode. * Clean up the inode.


@ -35,10 +35,10 @@ xfs_calc_dquots_per_chunk(
xfs_failaddr_t xfs_failaddr_t
xfs_dquot_verify( xfs_dquot_verify(
struct xfs_mount *mp, struct xfs_mount *mp,
xfs_disk_dquot_t *ddq, struct xfs_disk_dquot *ddq,
xfs_dqid_t id, xfs_dqid_t id,
uint type) /* used only during quotacheck */ uint type) /* used only during quotacheck */
{ {
/* /*
* We can encounter an uninitialized dquot buffer for 2 reasons: * We can encounter an uninitialized dquot buffer for 2 reasons:


@ -920,13 +920,13 @@ static inline uint xfs_dinode_size(int version)
* This enum is used in string mapping in xfs_trace.h; please keep the * This enum is used in string mapping in xfs_trace.h; please keep the
* TRACE_DEFINE_ENUMs for it up to date. * TRACE_DEFINE_ENUMs for it up to date.
*/ */
typedef enum xfs_dinode_fmt { enum xfs_dinode_fmt {
XFS_DINODE_FMT_DEV, /* xfs_dev_t */ XFS_DINODE_FMT_DEV, /* xfs_dev_t */
XFS_DINODE_FMT_LOCAL, /* bulk data */ XFS_DINODE_FMT_LOCAL, /* bulk data */
XFS_DINODE_FMT_EXTENTS, /* struct xfs_bmbt_rec */ XFS_DINODE_FMT_EXTENTS, /* struct xfs_bmbt_rec */
XFS_DINODE_FMT_BTREE, /* struct xfs_bmdr_block */ XFS_DINODE_FMT_BTREE, /* struct xfs_bmdr_block */
XFS_DINODE_FMT_UUID /* added long ago, but never used */ XFS_DINODE_FMT_UUID /* added long ago, but never used */
} xfs_dinode_fmt_t; };
#define XFS_INODE_FORMAT_STR \ #define XFS_INODE_FORMAT_STR \
{ XFS_DINODE_FMT_DEV, "dev" }, \ { XFS_DINODE_FMT_DEV, "dev" }, \
@ -1144,11 +1144,11 @@ static inline void xfs_dinode_put_rdev(struct xfs_dinode *dip, xfs_dev_t rdev)
/* /*
* This is the main portion of the on-disk representation of quota * This is the main portion of the on-disk representation of quota
* information for a user. This is the q_core of the xfs_dquot_t that * information for a user. This is the q_core of the struct xfs_dquot that
* is kept in kernel memory. We pad this with some more expansion room * is kept in kernel memory. We pad this with some more expansion room
* to construct the on disk structure. * to construct the on disk structure.
*/ */
typedef struct xfs_disk_dquot { struct xfs_disk_dquot {
__be16 d_magic; /* dquot magic = XFS_DQUOT_MAGIC */ __be16 d_magic; /* dquot magic = XFS_DQUOT_MAGIC */
__u8 d_version; /* dquot version */ __u8 d_version; /* dquot version */
__u8 d_flags; /* XFS_DQ_USER/PROJ/GROUP */ __u8 d_flags; /* XFS_DQ_USER/PROJ/GROUP */
@ -1171,15 +1171,15 @@ typedef struct xfs_disk_dquot {
__be32 d_rtbtimer; /* similar to above; for RT disk blocks */ __be32 d_rtbtimer; /* similar to above; for RT disk blocks */
__be16 d_rtbwarns; /* warnings issued wrt RT disk blocks */ __be16 d_rtbwarns; /* warnings issued wrt RT disk blocks */
__be16 d_pad; __be16 d_pad;
} xfs_disk_dquot_t; };
/* /*
* This is what goes on disk. This is separated from the xfs_disk_dquot because * This is what goes on disk. This is separated from the xfs_disk_dquot because
* carrying the unnecessary padding would be a waste of memory. * carrying the unnecessary padding would be a waste of memory.
*/ */
typedef struct xfs_dqblk { typedef struct xfs_dqblk {
xfs_disk_dquot_t dd_diskdq; /* portion that lives incore as well */ struct xfs_disk_dquot dd_diskdq; /* portion living incore as well */
char dd_fill[4]; /* filling for posterity */ char dd_fill[4];/* filling for posterity */
/* /*
* These two are only present on filesystems with the CRC bits set. * These two are only present on filesystems with the CRC bits set.


@ -324,7 +324,7 @@ typedef struct xfs_growfs_rt {
* Structures returned from ioctl XFS_IOC_FSBULKSTAT & XFS_IOC_FSBULKSTAT_SINGLE * Structures returned from ioctl XFS_IOC_FSBULKSTAT & XFS_IOC_FSBULKSTAT_SINGLE
*/ */
typedef struct xfs_bstime { typedef struct xfs_bstime {
time_t tv_sec; /* seconds */ __kernel_long_t tv_sec; /* seconds */
__s32 tv_nsec; /* and nanoseconds */ __s32 tv_nsec; /* and nanoseconds */
} xfs_bstime_t; } xfs_bstime_t;
@ -416,7 +416,7 @@ struct xfs_bulkstat {
/* /*
* Project quota id helpers (previously projid was 16bit only * Project quota id helpers (previously projid was 16bit only
* and using two 16bit values to hold new 32bit projid was choosen * and using two 16bit values to hold new 32bit projid was chosen
* to retain compatibility with "old" filesystems). * to retain compatibility with "old" filesystems).
*/ */
static inline uint32_t static inline uint32_t


@ -544,7 +544,10 @@ xfs_inobt_insert_sprec(
nrec->ir_free, &i); nrec->ir_free, &i);
if (error) if (error)
goto error; goto error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error;
}
goto out; goto out;
} }
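
The rest of this file repeats the same mechanical conversion, so the pattern is worth showing once in isolation. This is distilled from the hunks in this file, not new logic:

	/* old: the macro hid both the error assignment and the jump */
	XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error);

	/* new: the corruption test is explicit and always yields -EFSCORRUPTED */
	if (XFS_IS_CORRUPT(mp, i != 1)) {
		error = -EFSCORRUPTED;
		goto error;
	}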
@ -557,17 +560,23 @@ xfs_inobt_insert_sprec(
error = xfs_inobt_get_rec(cur, &rec, &i); error = xfs_inobt_get_rec(cur, &rec, &i);
if (error) if (error)
goto error; goto error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error); if (XFS_IS_CORRUPT(mp, i != 1)) {
XFS_WANT_CORRUPTED_GOTO(mp, error = -EFSCORRUPTED;
rec.ir_startino == nrec->ir_startino, goto error;
error); }
if (XFS_IS_CORRUPT(mp, rec.ir_startino != nrec->ir_startino)) {
error = -EFSCORRUPTED;
goto error;
}
/* /*
* This should never fail. If we have coexisting records that * This should never fail. If we have coexisting records that
* cannot merge, something is seriously wrong. * cannot merge, something is seriously wrong.
*/ */
XFS_WANT_CORRUPTED_GOTO(mp, __xfs_inobt_can_merge(nrec, &rec), if (XFS_IS_CORRUPT(mp, !__xfs_inobt_can_merge(nrec, &rec))) {
error); error = -EFSCORRUPTED;
goto error;
}
trace_xfs_irec_merge_pre(mp, agno, rec.ir_startino, trace_xfs_irec_merge_pre(mp, agno, rec.ir_startino,
rec.ir_holemask, nrec->ir_startino, rec.ir_holemask, nrec->ir_startino,
@ -1057,7 +1066,8 @@ xfs_ialloc_next_rec(
error = xfs_inobt_get_rec(cur, rec, &i); error = xfs_inobt_get_rec(cur, rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
} }
return 0; return 0;
@ -1081,7 +1091,8 @@ xfs_ialloc_get_rec(
error = xfs_inobt_get_rec(cur, rec, &i); error = xfs_inobt_get_rec(cur, rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
} }
return 0; return 0;
@ -1161,12 +1172,18 @@ xfs_dialloc_ag_inobt(
error = xfs_inobt_lookup(cur, pagino, XFS_LOOKUP_LE, &i); error = xfs_inobt_lookup(cur, pagino, XFS_LOOKUP_LE, &i);
if (error) if (error)
goto error0; goto error0;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
error = xfs_inobt_get_rec(cur, &rec, &j); error = xfs_inobt_get_rec(cur, &rec, &j);
if (error) if (error)
goto error0; goto error0;
XFS_WANT_CORRUPTED_GOTO(mp, j == 1, error0); if (XFS_IS_CORRUPT(mp, j != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
if (rec.ir_freecount > 0) { if (rec.ir_freecount > 0) {
/* /*
@ -1321,19 +1338,28 @@ xfs_dialloc_ag_inobt(
error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &i); error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &i);
if (error) if (error)
goto error0; goto error0;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
for (;;) { for (;;) {
error = xfs_inobt_get_rec(cur, &rec, &i); error = xfs_inobt_get_rec(cur, &rec, &i);
if (error) if (error)
goto error0; goto error0;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
if (rec.ir_freecount > 0) if (rec.ir_freecount > 0)
break; break;
error = xfs_btree_increment(cur, 0, &i); error = xfs_btree_increment(cur, 0, &i);
if (error) if (error)
goto error0; goto error0;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
} }
alloc_inode: alloc_inode:
@ -1393,7 +1419,8 @@ xfs_dialloc_ag_finobt_near(
error = xfs_inobt_get_rec(lcur, rec, &i); error = xfs_inobt_get_rec(lcur, rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(lcur->bc_mp, i == 1); if (XFS_IS_CORRUPT(lcur->bc_mp, i != 1))
return -EFSCORRUPTED;
/* /*
* See if we've landed in the parent inode record. The finobt * See if we've landed in the parent inode record. The finobt
@ -1416,10 +1443,16 @@ xfs_dialloc_ag_finobt_near(
error = xfs_inobt_get_rec(rcur, &rrec, &j); error = xfs_inobt_get_rec(rcur, &rrec, &j);
if (error) if (error)
goto error_rcur; goto error_rcur;
XFS_WANT_CORRUPTED_GOTO(lcur->bc_mp, j == 1, error_rcur); if (XFS_IS_CORRUPT(lcur->bc_mp, j != 1)) {
error = -EFSCORRUPTED;
goto error_rcur;
}
} }
XFS_WANT_CORRUPTED_GOTO(lcur->bc_mp, i == 1 || j == 1, error_rcur); if (XFS_IS_CORRUPT(lcur->bc_mp, i != 1 && j != 1)) {
error = -EFSCORRUPTED;
goto error_rcur;
}
if (i == 1 && j == 1) { if (i == 1 && j == 1) {
/* /*
* Both the left and right records are valid. Choose the closer * Both the left and right records are valid. Choose the closer
@ -1472,7 +1505,8 @@ xfs_dialloc_ag_finobt_newino(
error = xfs_inobt_get_rec(cur, rec, &i); error = xfs_inobt_get_rec(cur, rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
return 0; return 0;
} }
} }
@ -1483,12 +1517,14 @@ xfs_dialloc_ag_finobt_newino(
error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &i); error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
error = xfs_inobt_get_rec(cur, rec, &i); error = xfs_inobt_get_rec(cur, rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
return 0; return 0;
} }
@ -1510,20 +1546,24 @@ xfs_dialloc_ag_update_inobt(
error = xfs_inobt_lookup(cur, frec->ir_startino, XFS_LOOKUP_EQ, &i); error = xfs_inobt_lookup(cur, frec->ir_startino, XFS_LOOKUP_EQ, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
error = xfs_inobt_get_rec(cur, &rec, &i); error = xfs_inobt_get_rec(cur, &rec, &i);
if (error) if (error)
return error; return error;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, i == 1); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1))
return -EFSCORRUPTED;
ASSERT((XFS_AGINO_TO_OFFSET(cur->bc_mp, rec.ir_startino) % ASSERT((XFS_AGINO_TO_OFFSET(cur->bc_mp, rec.ir_startino) %
XFS_INODES_PER_CHUNK) == 0); XFS_INODES_PER_CHUNK) == 0);
rec.ir_free &= ~XFS_INOBT_MASK(offset); rec.ir_free &= ~XFS_INOBT_MASK(offset);
rec.ir_freecount--; rec.ir_freecount--;
XFS_WANT_CORRUPTED_RETURN(cur->bc_mp, (rec.ir_free == frec->ir_free) && if (XFS_IS_CORRUPT(cur->bc_mp,
(rec.ir_freecount == frec->ir_freecount)); rec.ir_free != frec->ir_free ||
rec.ir_freecount != frec->ir_freecount))
return -EFSCORRUPTED;
return xfs_inobt_update(cur, &rec); return xfs_inobt_update(cur, &rec);
} }
@ -1933,14 +1973,20 @@ xfs_difree_inobt(
__func__, error); __func__, error);
goto error0; goto error0;
} }
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
error = xfs_inobt_get_rec(cur, &rec, &i); error = xfs_inobt_get_rec(cur, &rec, &i);
if (error) { if (error) {
xfs_warn(mp, "%s: xfs_inobt_get_rec() returned error %d.", xfs_warn(mp, "%s: xfs_inobt_get_rec() returned error %d.",
__func__, error); __func__, error);
goto error0; goto error0;
} }
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error0); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error0;
}
/* /*
* Get the offset in the inode chunk. * Get the offset in the inode chunk.
*/ */
@ -2052,7 +2098,10 @@ xfs_difree_finobt(
* freed an inode in a previously fully allocated chunk. If not, * freed an inode in a previously fully allocated chunk. If not,
* something is out of sync. * something is out of sync.
*/ */
XFS_WANT_CORRUPTED_GOTO(mp, ibtrec->ir_freecount == 1, error); if (XFS_IS_CORRUPT(mp, ibtrec->ir_freecount != 1)) {
error = -EFSCORRUPTED;
goto error;
}
error = xfs_inobt_insert_rec(cur, ibtrec->ir_holemask, error = xfs_inobt_insert_rec(cur, ibtrec->ir_holemask,
ibtrec->ir_count, ibtrec->ir_count,
@ -2075,14 +2124,20 @@ xfs_difree_finobt(
error = xfs_inobt_get_rec(cur, &rec, &i); error = xfs_inobt_get_rec(cur, &rec, &i);
if (error) if (error)
goto error; goto error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto error;
}
rec.ir_free |= XFS_INOBT_MASK(offset); rec.ir_free |= XFS_INOBT_MASK(offset);
rec.ir_freecount++; rec.ir_freecount++;
XFS_WANT_CORRUPTED_GOTO(mp, (rec.ir_free == ibtrec->ir_free) && if (XFS_IS_CORRUPT(mp,
(rec.ir_freecount == ibtrec->ir_freecount), rec.ir_free != ibtrec->ir_free ||
error); rec.ir_freecount != ibtrec->ir_freecount)) {
error = -EFSCORRUPTED;
goto error;
}
/* /*
* The content of inobt records should always match between the inobt * The content of inobt records should always match between the inobt


@ -596,7 +596,7 @@ xfs_iext_realloc_root(
struct xfs_ifork *ifp, struct xfs_ifork *ifp,
struct xfs_iext_cursor *cur) struct xfs_iext_cursor *cur)
{ {
size_t new_size = ifp->if_bytes + sizeof(struct xfs_iext_rec); int64_t new_size = ifp->if_bytes + sizeof(struct xfs_iext_rec);
void *new; void *new;
/* account for the prev/next pointers */ /* account for the prev/next pointers */


@ -213,13 +213,12 @@ xfs_inode_from_disk(
to->di_version = from->di_version; to->di_version = from->di_version;
if (to->di_version == 1) { if (to->di_version == 1) {
set_nlink(inode, be16_to_cpu(from->di_onlink)); set_nlink(inode, be16_to_cpu(from->di_onlink));
to->di_projid_lo = 0; to->di_projid = 0;
to->di_projid_hi = 0;
to->di_version = 2; to->di_version = 2;
} else { } else {
set_nlink(inode, be32_to_cpu(from->di_nlink)); set_nlink(inode, be32_to_cpu(from->di_nlink));
to->di_projid_lo = be16_to_cpu(from->di_projid_lo); to->di_projid = (prid_t)be16_to_cpu(from->di_projid_hi) << 16 |
to->di_projid_hi = be16_to_cpu(from->di_projid_hi); be16_to_cpu(from->di_projid_lo);
} }
to->di_format = from->di_format; to->di_format = from->di_format;
@ -256,8 +255,8 @@ xfs_inode_from_disk(
if (to->di_version == 3) { if (to->di_version == 3) {
inode_set_iversion_queried(inode, inode_set_iversion_queried(inode,
be64_to_cpu(from->di_changecount)); be64_to_cpu(from->di_changecount));
to->di_crtime.t_sec = be32_to_cpu(from->di_crtime.t_sec); to->di_crtime.tv_sec = be32_to_cpu(from->di_crtime.t_sec);
to->di_crtime.t_nsec = be32_to_cpu(from->di_crtime.t_nsec); to->di_crtime.tv_nsec = be32_to_cpu(from->di_crtime.t_nsec);
to->di_flags2 = be64_to_cpu(from->di_flags2); to->di_flags2 = be64_to_cpu(from->di_flags2);
to->di_cowextsize = be32_to_cpu(from->di_cowextsize); to->di_cowextsize = be32_to_cpu(from->di_cowextsize);
} }
@ -279,8 +278,8 @@ xfs_inode_to_disk(
to->di_format = from->di_format; to->di_format = from->di_format;
to->di_uid = cpu_to_be32(from->di_uid); to->di_uid = cpu_to_be32(from->di_uid);
to->di_gid = cpu_to_be32(from->di_gid); to->di_gid = cpu_to_be32(from->di_gid);
to->di_projid_lo = cpu_to_be16(from->di_projid_lo); to->di_projid_lo = cpu_to_be16(from->di_projid & 0xffff);
to->di_projid_hi = cpu_to_be16(from->di_projid_hi); to->di_projid_hi = cpu_to_be16(from->di_projid >> 16);
memset(to->di_pad, 0, sizeof(to->di_pad)); memset(to->di_pad, 0, sizeof(to->di_pad));
to->di_atime.t_sec = cpu_to_be32(inode->i_atime.tv_sec); to->di_atime.t_sec = cpu_to_be32(inode->i_atime.tv_sec);
@ -306,8 +305,8 @@ xfs_inode_to_disk(
if (from->di_version == 3) { if (from->di_version == 3) {
to->di_changecount = cpu_to_be64(inode_peek_iversion(inode)); to->di_changecount = cpu_to_be64(inode_peek_iversion(inode));
to->di_crtime.t_sec = cpu_to_be32(from->di_crtime.t_sec); to->di_crtime.t_sec = cpu_to_be32(from->di_crtime.tv_sec);
to->di_crtime.t_nsec = cpu_to_be32(from->di_crtime.t_nsec); to->di_crtime.t_nsec = cpu_to_be32(from->di_crtime.tv_nsec);
to->di_flags2 = cpu_to_be64(from->di_flags2); to->di_flags2 = cpu_to_be64(from->di_flags2);
to->di_cowextsize = cpu_to_be32(from->di_cowextsize); to->di_cowextsize = cpu_to_be32(from->di_cowextsize);
to->di_ino = cpu_to_be64(ip->i_ino); to->di_ino = cpu_to_be64(ip->i_ino);
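
The incore inode now carries a single 32-bit di_projid, while the on-disk dinode keeps the historical hi/lo halves, as the two conversions above show. A standalone sketch of that packing, using hypothetical helper names that do not exist in the kernel:

	static inline uint32_t projid_pack(uint16_t hi, uint16_t lo)
	{
		/* mirrors the xfs_inode_from_disk() combination above */
		return ((uint32_t)hi << 16) | lo;
	}

	static inline void projid_unpack(uint32_t projid, uint16_t *hi, uint16_t *lo)
	{
		/* mirrors the xfs_inode_to_disk() split above */
		*hi = projid >> 16;
		*lo = projid & 0xffff;
	}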
@ -632,8 +631,6 @@ xfs_iread(
if ((iget_flags & XFS_IGET_CREATE) && if ((iget_flags & XFS_IGET_CREATE) &&
xfs_sb_version_hascrc(&mp->m_sb) && xfs_sb_version_hascrc(&mp->m_sb) &&
!(mp->m_flags & XFS_MOUNT_IKEEP)) { !(mp->m_flags & XFS_MOUNT_IKEEP)) {
/* initialise the on-disk inode core */
memset(&ip->i_d, 0, sizeof(ip->i_d));
VFS_I(ip)->i_generation = prandom_u32(); VFS_I(ip)->i_generation = prandom_u32();
ip->i_d.di_version = 3; ip->i_d.di_version = 3;
return 0; return 0;


@ -21,8 +21,7 @@ struct xfs_icdinode {
uint16_t di_flushiter; /* incremented on flush */ uint16_t di_flushiter; /* incremented on flush */
uint32_t di_uid; /* owner's user id */ uint32_t di_uid; /* owner's user id */
uint32_t di_gid; /* owner's group id */ uint32_t di_gid; /* owner's group id */
uint16_t di_projid_lo; /* lower part of owner's project id */ uint32_t di_projid; /* owner's project id */
uint16_t di_projid_hi; /* higher part of owner's project id */
xfs_fsize_t di_size; /* number of bytes in file */ xfs_fsize_t di_size; /* number of bytes in file */
xfs_rfsblock_t di_nblocks; /* # of direct & btree blocks used */ xfs_rfsblock_t di_nblocks; /* # of direct & btree blocks used */
xfs_extlen_t di_extsize; /* basic/minimum extent size for file */ xfs_extlen_t di_extsize; /* basic/minimum extent size for file */
@ -37,7 +36,7 @@ struct xfs_icdinode {
uint64_t di_flags2; /* more random flags */ uint64_t di_flags2; /* more random flags */
uint32_t di_cowextsize; /* basic cow extent size for file */ uint32_t di_cowextsize; /* basic cow extent size for file */
xfs_ictimestamp_t di_crtime; /* time created */ struct timespec64 di_crtime; /* time created */
}; };
/* /*


@ -75,11 +75,15 @@ xfs_iformat_fork(
error = xfs_iformat_btree(ip, dip, XFS_DATA_FORK); error = xfs_iformat_btree(ip, dip, XFS_DATA_FORK);
break; break;
default: default:
xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__,
dip, sizeof(*dip), __this_address);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
break; break;
default: default:
xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__, dip,
sizeof(*dip), __this_address);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
if (error) if (error)
@ -110,14 +114,16 @@ xfs_iformat_fork(
error = xfs_iformat_btree(ip, dip, XFS_ATTR_FORK); error = xfs_iformat_btree(ip, dip, XFS_ATTR_FORK);
break; break;
default: default:
xfs_inode_verifier_error(ip, error, __func__, dip,
sizeof(*dip), __this_address);
error = -EFSCORRUPTED; error = -EFSCORRUPTED;
break; break;
} }
if (error) { if (error) {
kmem_zone_free(xfs_ifork_zone, ip->i_afp); kmem_cache_free(xfs_ifork_zone, ip->i_afp);
ip->i_afp = NULL; ip->i_afp = NULL;
if (ip->i_cowfp) if (ip->i_cowfp)
kmem_zone_free(xfs_ifork_zone, ip->i_cowfp); kmem_cache_free(xfs_ifork_zone, ip->i_cowfp);
ip->i_cowfp = NULL; ip->i_cowfp = NULL;
xfs_idestroy_fork(ip, XFS_DATA_FORK); xfs_idestroy_fork(ip, XFS_DATA_FORK);
} }
@ -129,7 +135,7 @@ xfs_init_local_fork(
struct xfs_inode *ip, struct xfs_inode *ip,
int whichfork, int whichfork,
const void *data, const void *data,
int size) int64_t size)
{ {
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork); struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
int mem_size = size, real_size = 0; int mem_size = size, real_size = 0;
@ -467,11 +473,11 @@ xfs_iroot_realloc(
void void
xfs_idata_realloc( xfs_idata_realloc(
struct xfs_inode *ip, struct xfs_inode *ip,
int byte_diff, int64_t byte_diff,
int whichfork) int whichfork)
{ {
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork); struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
int new_size = (int)ifp->if_bytes + byte_diff; int64_t new_size = ifp->if_bytes + byte_diff;
ASSERT(new_size >= 0); ASSERT(new_size >= 0);
ASSERT(new_size <= XFS_IFORK_SIZE(ip, whichfork)); ASSERT(new_size <= XFS_IFORK_SIZE(ip, whichfork));
@ -525,10 +531,10 @@ xfs_idestroy_fork(
} }
if (whichfork == XFS_ATTR_FORK) { if (whichfork == XFS_ATTR_FORK) {
kmem_zone_free(xfs_ifork_zone, ip->i_afp); kmem_cache_free(xfs_ifork_zone, ip->i_afp);
ip->i_afp = NULL; ip->i_afp = NULL;
} else if (whichfork == XFS_COW_FORK) { } else if (whichfork == XFS_COW_FORK) {
kmem_zone_free(xfs_ifork_zone, ip->i_cowfp); kmem_cache_free(xfs_ifork_zone, ip->i_cowfp);
ip->i_cowfp = NULL; ip->i_cowfp = NULL;
} }
} }
@ -552,7 +558,7 @@ xfs_iextents_copy(
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork); struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
struct xfs_iext_cursor icur; struct xfs_iext_cursor icur;
struct xfs_bmbt_irec rec; struct xfs_bmbt_irec rec;
int copied = 0; int64_t copied = 0;
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED)); ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED));
ASSERT(ifp->if_bytes > 0); ASSERT(ifp->if_bytes > 0);


@@ -13,16 +13,16 @@ struct xfs_dinode;
 * File incore extent information, present for each of data & attr forks.
 */
 struct xfs_ifork {
-int if_bytes; /* bytes in if_u1 */
-unsigned int if_seq; /* fork mod counter */
+int64_t if_bytes; /* bytes in if_u1 */
 struct xfs_btree_block *if_broot; /* file's incore btree root */
-short if_broot_bytes; /* bytes allocated for root */
-unsigned char if_flags; /* per-fork flags */
+unsigned int if_seq; /* fork mod counter */
 int if_height; /* height of the extent tree */
 union {
 void *if_root; /* extent tree root */
 char *if_data; /* inline file data */
 } if_u1;
+short if_broot_bytes; /* bytes allocated for root */
+unsigned char if_flags; /* per-fork flags */
 };
 /*
@@ -87,18 +87,24 @@ struct xfs_ifork {
 #define XFS_IFORK_MAXEXT(ip, w) \
 (XFS_IFORK_SIZE(ip, w) / sizeof(xfs_bmbt_rec_t))
+#define xfs_ifork_has_extents(ip, w) \
+(XFS_IFORK_FORMAT((ip), (w)) == XFS_DINODE_FMT_EXTENTS || \
+XFS_IFORK_FORMAT((ip), (w)) == XFS_DINODE_FMT_BTREE)
 struct xfs_ifork *xfs_iext_state_to_fork(struct xfs_inode *ip, int state);
 int xfs_iformat_fork(struct xfs_inode *, struct xfs_dinode *);
 void xfs_iflush_fork(struct xfs_inode *, struct xfs_dinode *,
 struct xfs_inode_log_item *, int);
 void xfs_idestroy_fork(struct xfs_inode *, int);
-void xfs_idata_realloc(struct xfs_inode *, int, int);
+void xfs_idata_realloc(struct xfs_inode *ip, int64_t byte_diff,
+int whichfork);
 void xfs_iroot_realloc(struct xfs_inode *, int, int);
 int xfs_iread_extents(struct xfs_trans *, struct xfs_inode *, int);
 int xfs_iextents_copy(struct xfs_inode *, struct xfs_bmbt_rec *,
 int);
-void xfs_init_local_fork(struct xfs_inode *, int, const void *, int);
+void xfs_init_local_fork(struct xfs_inode *ip, int whichfork,
+const void *data, int64_t size);
 xfs_extnum_t xfs_iext_count(struct xfs_ifork *ifp);
 void xfs_iext_insert(struct xfs_inode *, struct xfs_iext_cursor *cur,
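
The new xfs_ifork_has_extents() macro folds the repeated "format is extents or btree" test into one predicate; the scrub dabtree code further down switches to it. A hedged stand-alone sketch of the same folding, with invented format constants in place of the XFS_DINODE_FMT_* values:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the on-disk fork format codes. */
enum fork_format { FMT_LOCAL, FMT_EXTENTS, FMT_BTREE };

/* One predicate instead of open-coding the two comparisons at every caller. */
static inline bool fork_has_extents(enum fork_format fmt)
{
	return fmt == FMT_EXTENTS || fmt == FMT_BTREE;
}

int main(void)
{
	enum fork_format fmt = FMT_LOCAL;

	/* Callers can now skip short-format forks with a single test. */
	if (!fork_has_extents(fmt))
		printf("short format, nothing to walk\n");
	return 0;
}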


@@ -432,9 +432,9 @@ static inline uint xfs_log_dinode_size(int version)
 }
 /*
-* Buffer Log Format defintions
+* Buffer Log Format definitions
 *
-* These are the physical dirty bitmap defintions for the log format structure.
+* These are the physical dirty bitmap definitions for the log format structure.
 */
 #define XFS_BLF_CHUNK 128
 #define XFS_BLF_SHIFT 7


@@ -30,14 +30,14 @@ typedef struct xlog_recover_item {
 xfs_log_iovec_t *ri_buf; /* ptr to regions buffer */
 } xlog_recover_item_t;
-typedef struct xlog_recover {
+struct xlog_recover {
 struct hlist_node r_list;
 xlog_tid_t r_log_tid; /* log's transaction id */
 xfs_trans_header_t r_theader; /* trans header for partial */
 int r_state; /* not needed */
 xfs_lsn_t r_lsn; /* xact lsn */
 struct list_head r_itemq; /* q for items */
-} xlog_recover_t;
+};
 #define ITEM_TYPE(i) (*(unsigned short *)(i)->ri_buf[0].i_addr)


@ -200,7 +200,10 @@ xfs_refcount_insert(
error = xfs_btree_insert(cur, i); error = xfs_btree_insert(cur, i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, *i == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, *i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
out_error: out_error:
if (error) if (error)
@ -227,10 +230,16 @@ xfs_refcount_delete(
error = xfs_refcount_get_rec(cur, &irec, &found_rec); error = xfs_refcount_get_rec(cur, &irec, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
trace_xfs_refcount_delete(cur->bc_mp, cur->bc_private.a.agno, &irec); trace_xfs_refcount_delete(cur->bc_mp, cur->bc_private.a.agno, &irec);
error = xfs_btree_delete(cur, i); error = xfs_btree_delete(cur, i);
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, *i == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, *i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (error) if (error)
goto out_error; goto out_error;
error = xfs_refcount_lookup_ge(cur, irec.rc_startblock, &found_rec); error = xfs_refcount_lookup_ge(cur, irec.rc_startblock, &found_rec);
@ -349,7 +358,10 @@ xfs_refcount_split_extent(
error = xfs_refcount_get_rec(cur, &rcext, &found_rec); error = xfs_refcount_get_rec(cur, &rcext, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (rcext.rc_startblock == agbno || xfs_refc_next(&rcext) <= agbno) if (rcext.rc_startblock == agbno || xfs_refc_next(&rcext) <= agbno)
return 0; return 0;
@ -371,7 +383,10 @@ xfs_refcount_split_extent(
error = xfs_refcount_insert(cur, &tmp, &found_rec); error = xfs_refcount_insert(cur, &tmp, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
return error; return error;
out_error: out_error:
@ -410,19 +425,27 @@ xfs_refcount_merge_center_extents(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (center->rc_refcount > 1) { if (center->rc_refcount > 1) {
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
} }
/* Enlarge the left extent. */ /* Enlarge the left extent. */
@ -430,7 +453,10 @@ xfs_refcount_merge_center_extents(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
left->rc_blockcount = extlen; left->rc_blockcount = extlen;
error = xfs_refcount_update(cur, left); error = xfs_refcount_update(cur, left);
@ -469,14 +495,18 @@ xfs_refcount_merge_left_extent(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
} }
/* Enlarge the left extent. */ /* Enlarge the left extent. */
@ -484,7 +514,10 @@ xfs_refcount_merge_left_extent(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
left->rc_blockcount += cleft->rc_blockcount; left->rc_blockcount += cleft->rc_blockcount;
error = xfs_refcount_update(cur, left); error = xfs_refcount_update(cur, left);
@ -526,14 +559,18 @@ xfs_refcount_merge_right_extent(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
} }
/* Enlarge the right extent. */ /* Enlarge the right extent. */
@ -541,7 +578,10 @@ xfs_refcount_merge_right_extent(
&found_rec); &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
right->rc_startblock -= cright->rc_blockcount; right->rc_startblock -= cright->rc_blockcount;
right->rc_blockcount += cright->rc_blockcount; right->rc_blockcount += cright->rc_blockcount;
@ -587,7 +627,10 @@ xfs_refcount_find_left_extents(
error = xfs_refcount_get_rec(cur, &tmp, &found_rec); error = xfs_refcount_get_rec(cur, &tmp, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (xfs_refc_next(&tmp) != agbno) if (xfs_refc_next(&tmp) != agbno)
return 0; return 0;
@ -605,8 +648,10 @@ xfs_refcount_find_left_extents(
error = xfs_refcount_get_rec(cur, &tmp, &found_rec); error = xfs_refcount_get_rec(cur, &tmp, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
/* if tmp starts at the end of our range, just use that */ /* if tmp starts at the end of our range, just use that */
if (tmp.rc_startblock == agbno) if (tmp.rc_startblock == agbno)
@ -671,7 +716,10 @@ xfs_refcount_find_right_extents(
error = xfs_refcount_get_rec(cur, &tmp, &found_rec); error = xfs_refcount_get_rec(cur, &tmp, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (tmp.rc_startblock != agbno + aglen) if (tmp.rc_startblock != agbno + aglen)
return 0; return 0;
@ -689,8 +737,10 @@ xfs_refcount_find_right_extents(
error = xfs_refcount_get_rec(cur, &tmp, &found_rec); error = xfs_refcount_get_rec(cur, &tmp, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, found_rec == 1, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
out_error); error = -EFSCORRUPTED;
goto out_error;
}
/* if tmp ends at the end of our range, just use that */ /* if tmp ends at the end of our range, just use that */
if (xfs_refc_next(&tmp) == agbno + aglen) if (xfs_refc_next(&tmp) == agbno + aglen)
@ -913,8 +963,11 @@ xfs_refcount_adjust_extents(
&found_tmp); &found_tmp);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp,
found_tmp == 1, out_error); found_tmp != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
cur->bc_private.a.priv.refc.nr_ops++; cur->bc_private.a.priv.refc.nr_ops++;
} else { } else {
fsbno = XFS_AGB_TO_FSB(cur->bc_mp, fsbno = XFS_AGB_TO_FSB(cur->bc_mp,
@ -955,8 +1008,10 @@ xfs_refcount_adjust_extents(
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
found_rec == 1, out_error); error = -EFSCORRUPTED;
goto out_error;
}
cur->bc_private.a.priv.refc.nr_ops++; cur->bc_private.a.priv.refc.nr_ops++;
goto advloop; goto advloop;
} else { } else {
@ -1122,7 +1177,7 @@ xfs_refcount_finish_one(
XFS_ALLOC_FLAG_FREEING, &agbp); XFS_ALLOC_FLAG_FREEING, &agbp);
if (error) if (error)
return error; return error;
if (!agbp) if (XFS_IS_CORRUPT(tp->t_mountp, !agbp))
return -EFSCORRUPTED; return -EFSCORRUPTED;
rcur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno); rcur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno);
@ -1272,7 +1327,10 @@ xfs_refcount_find_shared(
error = xfs_refcount_get_rec(cur, &tmp, &i); error = xfs_refcount_get_rec(cur, &tmp, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
/* If the extent ends before the start, look at the next one */ /* If the extent ends before the start, look at the next one */
if (tmp.rc_startblock + tmp.rc_blockcount <= agbno) { if (tmp.rc_startblock + tmp.rc_blockcount <= agbno) {
@ -1284,7 +1342,10 @@ xfs_refcount_find_shared(
error = xfs_refcount_get_rec(cur, &tmp, &i); error = xfs_refcount_get_rec(cur, &tmp, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
} }
/* If the extent starts after the range we want, bail out */ /* If the extent starts after the range we want, bail out */
@ -1312,7 +1373,10 @@ xfs_refcount_find_shared(
error = xfs_refcount_get_rec(cur, &tmp, &i); error = xfs_refcount_get_rec(cur, &tmp, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, i == 1, out_error); if (XFS_IS_CORRUPT(cur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (tmp.rc_startblock >= agbno + aglen || if (tmp.rc_startblock >= agbno + aglen ||
tmp.rc_startblock != *fbno + *flen) tmp.rc_startblock != *fbno + *flen)
break; break;
@ -1413,8 +1477,11 @@ xfs_refcount_adjust_cow_extents(
switch (adj) { switch (adj) {
case XFS_REFCOUNT_ADJUST_COW_ALLOC: case XFS_REFCOUNT_ADJUST_COW_ALLOC:
/* Adding a CoW reservation, there should be nothing here. */ /* Adding a CoW reservation, there should be nothing here. */
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp,
ext.rc_startblock >= agbno + aglen, out_error); agbno + aglen > ext.rc_startblock)) {
error = -EFSCORRUPTED;
goto out_error;
}
tmp.rc_startblock = agbno; tmp.rc_startblock = agbno;
tmp.rc_blockcount = aglen; tmp.rc_blockcount = aglen;
@ -1426,17 +1493,25 @@ xfs_refcount_adjust_cow_extents(
&found_tmp); &found_tmp);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp, found_tmp != 1)) {
found_tmp == 1, out_error); error = -EFSCORRUPTED;
goto out_error;
}
break; break;
case XFS_REFCOUNT_ADJUST_COW_FREE: case XFS_REFCOUNT_ADJUST_COW_FREE:
/* Removing a CoW reservation, there should be one extent. */ /* Removing a CoW reservation, there should be one extent. */
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp, ext.rc_startblock != agbno)) {
ext.rc_startblock == agbno, out_error); error = -EFSCORRUPTED;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, goto out_error;
ext.rc_blockcount == aglen, out_error); }
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp, ext.rc_blockcount != aglen)) {
ext.rc_refcount == 1, out_error); error = -EFSCORRUPTED;
goto out_error;
}
if (XFS_IS_CORRUPT(cur->bc_mp, ext.rc_refcount != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
ext.rc_refcount = 0; ext.rc_refcount = 0;
trace_xfs_refcount_modify_extent(cur->bc_mp, trace_xfs_refcount_modify_extent(cur->bc_mp,
@ -1444,8 +1519,10 @@ xfs_refcount_adjust_cow_extents(
error = xfs_refcount_delete(cur, &found_rec); error = xfs_refcount_delete(cur, &found_rec);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(cur->bc_mp, if (XFS_IS_CORRUPT(cur->bc_mp, found_rec != 1)) {
found_rec == 1, out_error); error = -EFSCORRUPTED;
goto out_error;
}
break; break;
default: default:
ASSERT(0); ASSERT(0);
@ -1584,14 +1661,15 @@ struct xfs_refcount_recovery {
/* Stuff an extent on the recovery list. */ /* Stuff an extent on the recovery list. */
STATIC int STATIC int
xfs_refcount_recover_extent( xfs_refcount_recover_extent(
struct xfs_btree_cur *cur, struct xfs_btree_cur *cur,
union xfs_btree_rec *rec, union xfs_btree_rec *rec,
void *priv) void *priv)
{ {
struct list_head *debris = priv; struct list_head *debris = priv;
struct xfs_refcount_recovery *rr; struct xfs_refcount_recovery *rr;
if (be32_to_cpu(rec->refc.rc_refcount) != 1) if (XFS_IS_CORRUPT(cur->bc_mp,
be32_to_cpu(rec->refc.rc_refcount) != 1))
return -EFSCORRUPTED; return -EFSCORRUPTED;
rr = kmem_alloc(sizeof(struct xfs_refcount_recovery), 0); rr = kmem_alloc(sizeof(struct xfs_refcount_recovery), 0);


@ -113,7 +113,10 @@ xfs_rmap_insert(
error = xfs_rmap_lookup_eq(rcur, agbno, len, owner, offset, flags, &i); error = xfs_rmap_lookup_eq(rcur, agbno, len, owner, offset, flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(rcur->bc_mp, i == 0, done); if (XFS_IS_CORRUPT(rcur->bc_mp, i != 0)) {
error = -EFSCORRUPTED;
goto done;
}
rcur->bc_rec.r.rm_startblock = agbno; rcur->bc_rec.r.rm_startblock = agbno;
rcur->bc_rec.r.rm_blockcount = len; rcur->bc_rec.r.rm_blockcount = len;
@ -123,7 +126,10 @@ xfs_rmap_insert(
error = xfs_btree_insert(rcur, &i); error = xfs_btree_insert(rcur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(rcur->bc_mp, i == 1, done); if (XFS_IS_CORRUPT(rcur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
done: done:
if (error) if (error)
trace_xfs_rmap_insert_error(rcur->bc_mp, trace_xfs_rmap_insert_error(rcur->bc_mp,
@ -149,12 +155,18 @@ xfs_rmap_delete(
error = xfs_rmap_lookup_eq(rcur, agbno, len, owner, offset, flags, &i); error = xfs_rmap_lookup_eq(rcur, agbno, len, owner, offset, flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(rcur->bc_mp, i == 1, done); if (XFS_IS_CORRUPT(rcur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_delete(rcur, &i); error = xfs_btree_delete(rcur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(rcur->bc_mp, i == 1, done); if (XFS_IS_CORRUPT(rcur->bc_mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
done: done:
if (error) if (error)
trace_xfs_rmap_delete_error(rcur->bc_mp, trace_xfs_rmap_delete_error(rcur->bc_mp,
@ -406,24 +418,39 @@ xfs_rmap_free_check_owner(
return 0; return 0;
/* Make sure the unwritten flag matches. */ /* Make sure the unwritten flag matches. */
XFS_WANT_CORRUPTED_GOTO(mp, (flags & XFS_RMAP_UNWRITTEN) == if (XFS_IS_CORRUPT(mp,
(rec->rm_flags & XFS_RMAP_UNWRITTEN), out); (flags & XFS_RMAP_UNWRITTEN) !=
(rec->rm_flags & XFS_RMAP_UNWRITTEN))) {
error = -EFSCORRUPTED;
goto out;
}
/* Make sure the owner matches what we expect to find in the tree. */ /* Make sure the owner matches what we expect to find in the tree. */
XFS_WANT_CORRUPTED_GOTO(mp, owner == rec->rm_owner, out); if (XFS_IS_CORRUPT(mp, owner != rec->rm_owner)) {
error = -EFSCORRUPTED;
goto out;
}
/* Check the offset, if necessary. */ /* Check the offset, if necessary. */
if (XFS_RMAP_NON_INODE_OWNER(owner)) if (XFS_RMAP_NON_INODE_OWNER(owner))
goto out; goto out;
if (flags & XFS_RMAP_BMBT_BLOCK) { if (flags & XFS_RMAP_BMBT_BLOCK) {
XFS_WANT_CORRUPTED_GOTO(mp, rec->rm_flags & XFS_RMAP_BMBT_BLOCK, if (XFS_IS_CORRUPT(mp,
out); !(rec->rm_flags & XFS_RMAP_BMBT_BLOCK))) {
error = -EFSCORRUPTED;
goto out;
}
} else { } else {
XFS_WANT_CORRUPTED_GOTO(mp, rec->rm_offset <= offset, out); if (XFS_IS_CORRUPT(mp, rec->rm_offset > offset)) {
XFS_WANT_CORRUPTED_GOTO(mp, error = -EFSCORRUPTED;
ltoff + rec->rm_blockcount >= offset + len, goto out;
out); }
if (XFS_IS_CORRUPT(mp,
offset + len > ltoff + rec->rm_blockcount)) {
error = -EFSCORRUPTED;
goto out;
}
} }
out: out:
@ -482,12 +509,18 @@ xfs_rmap_unmap(
error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, flags, &i); error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, flags, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
error = xfs_rmap_get_rec(cur, &ltrec, &i); error = xfs_rmap_get_rec(cur, &ltrec, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
trace_xfs_rmap_lookup_le_range_result(cur->bc_mp, trace_xfs_rmap_lookup_le_range_result(cur->bc_mp,
cur->bc_private.a.agno, ltrec.rm_startblock, cur->bc_private.a.agno, ltrec.rm_startblock,
ltrec.rm_blockcount, ltrec.rm_owner, ltrec.rm_blockcount, ltrec.rm_owner,
@ -502,8 +535,12 @@ xfs_rmap_unmap(
* be the case that the "left" extent goes all the way to EOFS. * be the case that the "left" extent goes all the way to EOFS.
*/ */
if (owner == XFS_RMAP_OWN_NULL) { if (owner == XFS_RMAP_OWN_NULL) {
XFS_WANT_CORRUPTED_GOTO(mp, bno >= ltrec.rm_startblock + if (XFS_IS_CORRUPT(mp,
ltrec.rm_blockcount, out_error); bno <
ltrec.rm_startblock + ltrec.rm_blockcount)) {
error = -EFSCORRUPTED;
goto out_error;
}
goto out_done; goto out_done;
} }
@ -526,15 +563,22 @@ xfs_rmap_unmap(
error = xfs_rmap_get_rec(cur, &rtrec, &i); error = xfs_rmap_get_rec(cur, &rtrec, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (rtrec.rm_startblock >= bno + len) if (rtrec.rm_startblock >= bno + len)
goto out_done; goto out_done;
} }
/* Make sure the extent we found covers the entire freeing range. */ /* Make sure the extent we found covers the entire freeing range. */
XFS_WANT_CORRUPTED_GOTO(mp, ltrec.rm_startblock <= bno && if (XFS_IS_CORRUPT(mp,
ltrec.rm_startblock + ltrec.rm_blockcount >= ltrec.rm_startblock > bno ||
bno + len, out_error); ltrec.rm_startblock + ltrec.rm_blockcount <
bno + len)) {
error = -EFSCORRUPTED;
goto out_error;
}
/* Check owner information. */ /* Check owner information. */
error = xfs_rmap_free_check_owner(mp, ltoff, &ltrec, len, owner, error = xfs_rmap_free_check_owner(mp, ltoff, &ltrec, len, owner,
@ -551,7 +595,10 @@ xfs_rmap_unmap(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
} else if (ltrec.rm_startblock == bno) { } else if (ltrec.rm_startblock == bno) {
/* /*
* overlap left hand side of extent: move the start, trim the * overlap left hand side of extent: move the start, trim the
@ -743,7 +790,10 @@ xfs_rmap_map(
error = xfs_rmap_get_rec(cur, &ltrec, &have_lt); error = xfs_rmap_get_rec(cur, &ltrec, &have_lt);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_lt == 1, out_error); if (XFS_IS_CORRUPT(mp, have_lt != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
trace_xfs_rmap_lookup_le_range_result(cur->bc_mp, trace_xfs_rmap_lookup_le_range_result(cur->bc_mp,
cur->bc_private.a.agno, ltrec.rm_startblock, cur->bc_private.a.agno, ltrec.rm_startblock,
ltrec.rm_blockcount, ltrec.rm_owner, ltrec.rm_blockcount, ltrec.rm_owner,
@ -753,9 +803,12 @@ xfs_rmap_map(
have_lt = 0; have_lt = 0;
} }
XFS_WANT_CORRUPTED_GOTO(mp, if (XFS_IS_CORRUPT(mp,
have_lt == 0 || have_lt != 0 &&
ltrec.rm_startblock + ltrec.rm_blockcount <= bno, out_error); ltrec.rm_startblock + ltrec.rm_blockcount > bno)) {
error = -EFSCORRUPTED;
goto out_error;
}
/* /*
* Increment the cursor to see if we have a right-adjacent record to our * Increment the cursor to see if we have a right-adjacent record to our
@ -769,9 +822,14 @@ xfs_rmap_map(
error = xfs_rmap_get_rec(cur, &gtrec, &have_gt); error = xfs_rmap_get_rec(cur, &gtrec, &have_gt);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_gt == 1, out_error); if (XFS_IS_CORRUPT(mp, have_gt != 1)) {
XFS_WANT_CORRUPTED_GOTO(mp, bno + len <= gtrec.rm_startblock, error = -EFSCORRUPTED;
out_error); goto out_error;
}
if (XFS_IS_CORRUPT(mp, bno + len > gtrec.rm_startblock)) {
error = -EFSCORRUPTED;
goto out_error;
}
trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp, trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp,
cur->bc_private.a.agno, gtrec.rm_startblock, cur->bc_private.a.agno, gtrec.rm_startblock,
gtrec.rm_blockcount, gtrec.rm_owner, gtrec.rm_blockcount, gtrec.rm_owner,
@ -821,7 +879,10 @@ xfs_rmap_map(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
} }
/* point the cursor back to the left record and update */ /* point the cursor back to the left record and update */
@ -865,7 +926,10 @@ xfs_rmap_map(
error = xfs_btree_insert(cur, &i); error = xfs_btree_insert(cur, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
} }
trace_xfs_rmap_map_done(mp, cur->bc_private.a.agno, bno, len, trace_xfs_rmap_map_done(mp, cur->bc_private.a.agno, bno, len,
@ -957,12 +1021,18 @@ xfs_rmap_convert(
error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, oldext, &i); error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, oldext, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_rmap_get_rec(cur, &PREV, &i); error = xfs_rmap_get_rec(cur, &PREV, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_lookup_le_range_result(cur->bc_mp, trace_xfs_rmap_lookup_le_range_result(cur->bc_mp,
cur->bc_private.a.agno, PREV.rm_startblock, cur->bc_private.a.agno, PREV.rm_startblock,
PREV.rm_blockcount, PREV.rm_owner, PREV.rm_blockcount, PREV.rm_owner,
@ -995,10 +1065,16 @@ xfs_rmap_convert(
error = xfs_rmap_get_rec(cur, &LEFT, &i); error = xfs_rmap_get_rec(cur, &LEFT, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
XFS_WANT_CORRUPTED_GOTO(mp, error = -EFSCORRUPTED;
LEFT.rm_startblock + LEFT.rm_blockcount <= bno, goto done;
done); }
if (XFS_IS_CORRUPT(mp,
LEFT.rm_startblock + LEFT.rm_blockcount >
bno)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_find_left_neighbor_result(cur->bc_mp, trace_xfs_rmap_find_left_neighbor_result(cur->bc_mp,
cur->bc_private.a.agno, LEFT.rm_startblock, cur->bc_private.a.agno, LEFT.rm_startblock,
LEFT.rm_blockcount, LEFT.rm_owner, LEFT.rm_blockcount, LEFT.rm_owner,
@ -1017,7 +1093,10 @@ xfs_rmap_convert(
error = xfs_btree_increment(cur, 0, &i); error = xfs_btree_increment(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_increment(cur, 0, &i); error = xfs_btree_increment(cur, 0, &i);
if (error) if (error)
goto done; goto done;
@ -1026,9 +1105,14 @@ xfs_rmap_convert(
error = xfs_rmap_get_rec(cur, &RIGHT, &i); error = xfs_rmap_get_rec(cur, &RIGHT, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
XFS_WANT_CORRUPTED_GOTO(mp, bno + len <= RIGHT.rm_startblock, error = -EFSCORRUPTED;
done); goto done;
}
if (XFS_IS_CORRUPT(mp, bno + len > RIGHT.rm_startblock)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp, trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp,
cur->bc_private.a.agno, RIGHT.rm_startblock, cur->bc_private.a.agno, RIGHT.rm_startblock,
RIGHT.rm_blockcount, RIGHT.rm_owner, RIGHT.rm_blockcount, RIGHT.rm_owner,
@ -1055,7 +1139,10 @@ xfs_rmap_convert(
error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, oldext, &i); error = xfs_rmap_lookup_le(cur, bno, len, owner, offset, oldext, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
/* /*
* Switch out based on the FILLING and CONTIG state bits. * Switch out based on the FILLING and CONTIG state bits.
@ -1071,7 +1158,10 @@ xfs_rmap_convert(
error = xfs_btree_increment(cur, 0, &i); error = xfs_btree_increment(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_delete(mp, cur->bc_private.a.agno, trace_xfs_rmap_delete(mp, cur->bc_private.a.agno,
RIGHT.rm_startblock, RIGHT.rm_blockcount, RIGHT.rm_startblock, RIGHT.rm_blockcount,
RIGHT.rm_owner, RIGHT.rm_offset, RIGHT.rm_owner, RIGHT.rm_offset,
@ -1079,11 +1169,17 @@ xfs_rmap_convert(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_decrement(cur, 0, &i); error = xfs_btree_decrement(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_delete(mp, cur->bc_private.a.agno, trace_xfs_rmap_delete(mp, cur->bc_private.a.agno,
PREV.rm_startblock, PREV.rm_blockcount, PREV.rm_startblock, PREV.rm_blockcount,
PREV.rm_owner, PREV.rm_offset, PREV.rm_owner, PREV.rm_offset,
@ -1091,11 +1187,17 @@ xfs_rmap_convert(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_decrement(cur, 0, &i); error = xfs_btree_decrement(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW = LEFT; NEW = LEFT;
NEW.rm_blockcount += PREV.rm_blockcount + RIGHT.rm_blockcount; NEW.rm_blockcount += PREV.rm_blockcount + RIGHT.rm_blockcount;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
@ -1115,11 +1217,17 @@ xfs_rmap_convert(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_decrement(cur, 0, &i); error = xfs_btree_decrement(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW = LEFT; NEW = LEFT;
NEW.rm_blockcount += PREV.rm_blockcount; NEW.rm_blockcount += PREV.rm_blockcount;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
@ -1135,7 +1243,10 @@ xfs_rmap_convert(
error = xfs_btree_increment(cur, 0, &i); error = xfs_btree_increment(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_delete(mp, cur->bc_private.a.agno, trace_xfs_rmap_delete(mp, cur->bc_private.a.agno,
RIGHT.rm_startblock, RIGHT.rm_blockcount, RIGHT.rm_startblock, RIGHT.rm_blockcount,
RIGHT.rm_owner, RIGHT.rm_offset, RIGHT.rm_owner, RIGHT.rm_offset,
@ -1143,11 +1254,17 @@ xfs_rmap_convert(
error = xfs_btree_delete(cur, &i); error = xfs_btree_delete(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
error = xfs_btree_decrement(cur, 0, &i); error = xfs_btree_decrement(cur, 0, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW = PREV; NEW = PREV;
NEW.rm_blockcount = len + RIGHT.rm_blockcount; NEW.rm_blockcount = len + RIGHT.rm_blockcount;
NEW.rm_flags = newext; NEW.rm_flags = newext;
@ -1214,7 +1331,10 @@ xfs_rmap_convert(
error = xfs_btree_insert(cur, &i); error = xfs_btree_insert(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
break; break;
case RMAP_RIGHT_FILLING | RMAP_RIGHT_CONTIG: case RMAP_RIGHT_FILLING | RMAP_RIGHT_CONTIG:
@ -1253,7 +1373,10 @@ xfs_rmap_convert(
oldext, &i); oldext, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 0, done); if (XFS_IS_CORRUPT(mp, i != 0)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_startblock = bno; NEW.rm_startblock = bno;
NEW.rm_owner = owner; NEW.rm_owner = owner;
NEW.rm_offset = offset; NEW.rm_offset = offset;
@ -1265,7 +1388,10 @@ xfs_rmap_convert(
error = xfs_btree_insert(cur, &i); error = xfs_btree_insert(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
break; break;
case 0: case 0:
@ -1295,7 +1421,10 @@ xfs_rmap_convert(
error = xfs_btree_insert(cur, &i); error = xfs_btree_insert(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
/* /*
* Reset the cursor to the position of the new extent * Reset the cursor to the position of the new extent
* we are about to insert as we can't trust it after * we are about to insert as we can't trust it after
@ -1305,7 +1434,10 @@ xfs_rmap_convert(
oldext, &i); oldext, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 0, done); if (XFS_IS_CORRUPT(mp, i != 0)) {
error = -EFSCORRUPTED;
goto done;
}
/* new middle extent - newext */ /* new middle extent - newext */
cur->bc_rec.r.rm_flags &= ~XFS_RMAP_UNWRITTEN; cur->bc_rec.r.rm_flags &= ~XFS_RMAP_UNWRITTEN;
cur->bc_rec.r.rm_flags |= newext; cur->bc_rec.r.rm_flags |= newext;
@ -1314,7 +1446,10 @@ xfs_rmap_convert(
error = xfs_btree_insert(cur, &i); error = xfs_btree_insert(cur, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
break; break;
case RMAP_LEFT_FILLING | RMAP_LEFT_CONTIG | RMAP_RIGHT_CONTIG: case RMAP_LEFT_FILLING | RMAP_LEFT_CONTIG | RMAP_RIGHT_CONTIG:
@ -1383,7 +1518,10 @@ xfs_rmap_convert_shared(
&PREV, &i); &PREV, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
ASSERT(PREV.rm_offset <= offset); ASSERT(PREV.rm_offset <= offset);
ASSERT(PREV.rm_offset + PREV.rm_blockcount >= new_endoff); ASSERT(PREV.rm_offset + PREV.rm_blockcount >= new_endoff);
@ -1406,9 +1544,12 @@ xfs_rmap_convert_shared(
goto done; goto done;
if (i) { if (i) {
state |= RMAP_LEFT_VALID; state |= RMAP_LEFT_VALID;
XFS_WANT_CORRUPTED_GOTO(mp, if (XFS_IS_CORRUPT(mp,
LEFT.rm_startblock + LEFT.rm_blockcount <= bno, LEFT.rm_startblock + LEFT.rm_blockcount >
done); bno)) {
error = -EFSCORRUPTED;
goto done;
}
if (xfs_rmap_is_mergeable(&LEFT, owner, newext)) if (xfs_rmap_is_mergeable(&LEFT, owner, newext))
state |= RMAP_LEFT_CONTIG; state |= RMAP_LEFT_CONTIG;
} }
@ -1423,9 +1564,14 @@ xfs_rmap_convert_shared(
error = xfs_rmap_get_rec(cur, &RIGHT, &i); error = xfs_rmap_get_rec(cur, &RIGHT, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
XFS_WANT_CORRUPTED_GOTO(mp, bno + len <= RIGHT.rm_startblock, error = -EFSCORRUPTED;
done); goto done;
}
if (XFS_IS_CORRUPT(mp, bno + len > RIGHT.rm_startblock)) {
error = -EFSCORRUPTED;
goto done;
}
trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp, trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp,
cur->bc_private.a.agno, RIGHT.rm_startblock, cur->bc_private.a.agno, RIGHT.rm_startblock,
RIGHT.rm_blockcount, RIGHT.rm_owner, RIGHT.rm_blockcount, RIGHT.rm_owner,
@ -1472,7 +1618,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount += PREV.rm_blockcount + RIGHT.rm_blockcount; NEW.rm_blockcount += PREV.rm_blockcount + RIGHT.rm_blockcount;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1495,7 +1644,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount += PREV.rm_blockcount; NEW.rm_blockcount += PREV.rm_blockcount;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1518,7 +1670,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount += RIGHT.rm_blockcount; NEW.rm_blockcount += RIGHT.rm_blockcount;
NEW.rm_flags = RIGHT.rm_flags; NEW.rm_flags = RIGHT.rm_flags;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
@ -1538,7 +1693,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_flags = newext; NEW.rm_flags = newext;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1570,7 +1728,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount += len; NEW.rm_blockcount += len;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1612,7 +1773,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount = offset - NEW.rm_offset; NEW.rm_blockcount = offset - NEW.rm_offset;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1644,7 +1808,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount -= len; NEW.rm_blockcount -= len;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1679,7 +1846,10 @@ xfs_rmap_convert_shared(
NEW.rm_offset, NEW.rm_flags, &i); NEW.rm_offset, NEW.rm_flags, &i);
if (error) if (error)
goto done; goto done;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, done); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto done;
}
NEW.rm_blockcount = offset - NEW.rm_offset; NEW.rm_blockcount = offset - NEW.rm_offset;
error = xfs_rmap_update(cur, &NEW); error = xfs_rmap_update(cur, &NEW);
if (error) if (error)
@ -1765,25 +1935,44 @@ xfs_rmap_unmap_shared(
&ltrec, &i); &ltrec, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
ltoff = ltrec.rm_offset; ltoff = ltrec.rm_offset;
/* Make sure the extent we found covers the entire freeing range. */ /* Make sure the extent we found covers the entire freeing range. */
XFS_WANT_CORRUPTED_GOTO(mp, ltrec.rm_startblock <= bno && if (XFS_IS_CORRUPT(mp,
ltrec.rm_startblock + ltrec.rm_blockcount >= ltrec.rm_startblock > bno ||
bno + len, out_error); ltrec.rm_startblock + ltrec.rm_blockcount <
bno + len)) {
error = -EFSCORRUPTED;
goto out_error;
}
/* Make sure the owner matches what we expect to find in the tree. */ /* Make sure the owner matches what we expect to find in the tree. */
XFS_WANT_CORRUPTED_GOTO(mp, owner == ltrec.rm_owner, out_error); if (XFS_IS_CORRUPT(mp, owner != ltrec.rm_owner)) {
error = -EFSCORRUPTED;
goto out_error;
}
/* Make sure the unwritten flag matches. */ /* Make sure the unwritten flag matches. */
XFS_WANT_CORRUPTED_GOTO(mp, (flags & XFS_RMAP_UNWRITTEN) == if (XFS_IS_CORRUPT(mp,
(ltrec.rm_flags & XFS_RMAP_UNWRITTEN), out_error); (flags & XFS_RMAP_UNWRITTEN) !=
(ltrec.rm_flags & XFS_RMAP_UNWRITTEN))) {
error = -EFSCORRUPTED;
goto out_error;
}
/* Check the offset. */ /* Check the offset. */
XFS_WANT_CORRUPTED_GOTO(mp, ltrec.rm_offset <= offset, out_error); if (XFS_IS_CORRUPT(mp, ltrec.rm_offset > offset)) {
XFS_WANT_CORRUPTED_GOTO(mp, offset <= ltoff + ltrec.rm_blockcount, error = -EFSCORRUPTED;
out_error); goto out_error;
}
if (XFS_IS_CORRUPT(mp, offset > ltoff + ltrec.rm_blockcount)) {
error = -EFSCORRUPTED;
goto out_error;
}
if (ltrec.rm_startblock == bno && ltrec.rm_blockcount == len) { if (ltrec.rm_startblock == bno && ltrec.rm_blockcount == len) {
/* Exact match, simply remove the record from rmap tree. */ /* Exact match, simply remove the record from rmap tree. */
@ -1836,7 +2025,10 @@ xfs_rmap_unmap_shared(
ltrec.rm_offset, ltrec.rm_flags, &i); ltrec.rm_offset, ltrec.rm_flags, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
ltrec.rm_blockcount -= len; ltrec.rm_blockcount -= len;
error = xfs_rmap_update(cur, &ltrec); error = xfs_rmap_update(cur, &ltrec);
if (error) if (error)
@ -1862,7 +2054,10 @@ xfs_rmap_unmap_shared(
ltrec.rm_offset, ltrec.rm_flags, &i); ltrec.rm_offset, ltrec.rm_flags, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
ltrec.rm_blockcount = bno - ltrec.rm_startblock; ltrec.rm_blockcount = bno - ltrec.rm_startblock;
error = xfs_rmap_update(cur, &ltrec); error = xfs_rmap_update(cur, &ltrec);
if (error) if (error)
@ -1938,7 +2133,10 @@ xfs_rmap_map_shared(
error = xfs_rmap_get_rec(cur, &gtrec, &have_gt); error = xfs_rmap_get_rec(cur, &gtrec, &have_gt);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_gt == 1, out_error); if (XFS_IS_CORRUPT(mp, have_gt != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp, trace_xfs_rmap_find_right_neighbor_result(cur->bc_mp,
cur->bc_private.a.agno, gtrec.rm_startblock, cur->bc_private.a.agno, gtrec.rm_startblock,
gtrec.rm_blockcount, gtrec.rm_owner, gtrec.rm_blockcount, gtrec.rm_owner,
@ -1987,7 +2185,10 @@ xfs_rmap_map_shared(
ltrec.rm_offset, ltrec.rm_flags, &i); ltrec.rm_offset, ltrec.rm_flags, &i);
if (error) if (error)
goto out_error; goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_error); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_error;
}
error = xfs_rmap_update(cur, &ltrec); error = xfs_rmap_update(cur, &ltrec);
if (error) if (error)
@ -2199,7 +2400,7 @@ xfs_rmap_finish_one(
error = xfs_free_extent_fix_freelist(tp, agno, &agbp); error = xfs_free_extent_fix_freelist(tp, agno, &agbp);
if (error) if (error)
return error; return error;
if (!agbp) if (XFS_IS_CORRUPT(tp->t_mountp, !agbp))
return -EFSCORRUPTED; return -EFSCORRUPTED;
rcur = xfs_rmapbt_init_cursor(mp, tp, agbp, agno); rcur = xfs_rmapbt_init_cursor(mp, tp, agbp, agno);


@@ -15,7 +15,7 @@
 #include "xfs_bmap.h"
 #include "xfs_trans.h"
 #include "xfs_rtalloc.h"
+#include "xfs_error.h"
 /*
 * Realtime allocator bitmap functions shared with userspace.
@@ -70,7 +70,7 @@ xfs_rtbuf_get(
 if (error)
 return error;
-if (nmap == 0 || !xfs_bmap_is_real_extent(&map))
+if (XFS_IS_CORRUPT(mp, nmap == 0 || !xfs_bmap_is_real_extent(&map)))
 return -EFSCORRUPTED;
 ASSERT(map.br_startblock != NULLFSBLOCK);


@@ -10,6 +10,7 @@
 #include "xfs_log_format.h"
 #include "xfs_trans_resv.h"
 #include "xfs_bit.h"
+#include "xfs_sb.h"
 #include "xfs_mount.h"
 #include "xfs_ialloc.h"
 #include "xfs_alloc.h"


@@ -55,7 +55,7 @@ xfs_trans_ichgtime(
 int flags)
 {
 struct inode *inode = VFS_I(ip);
 struct timespec64 tv;
 ASSERT(tp);
 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
@@ -66,10 +66,8 @@ xfs_trans_ichgtime(
 inode->i_mtime = tv;
 if (flags & XFS_ICHGTIME_CHG)
 inode->i_ctime = tv;
-if (flags & XFS_ICHGTIME_CREATE) {
-ip->i_d.di_crtime.t_sec = (int32_t)tv.tv_sec;
-ip->i_d.di_crtime.t_nsec = (int32_t)tv.tv_nsec;
-}
+if (flags & XFS_ICHGTIME_CREATE)
+ip->i_d.di_crtime = tv;
 }
 /*
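
With di_crtime now held as a struct timespec64 in the incore inode (the "deuglify incore projid and crtime types" item), the create-time update collapses from two casted field assignments into a single struct assignment. A userspace illustration of the same cleanup using plain struct timespec; the type and field names are invented for the sketch.

#include <stdio.h>
#include <time.h>

/* After the cleanup: keep the native timespec and assign it directly. */
struct fake_icdinode {
	struct timespec di_crtime;
};

int main(void)
{
	struct timespec tv;
	struct fake_icdinode d;

	clock_gettime(CLOCK_REALTIME, &tv);

	/* One struct assignment instead of per-field second/nanosecond casts. */
	d.di_crtime = tv;

	printf("crtime: %lld.%09ld\n", (long long)d.di_crtime.tv_sec,
	       d.di_crtime.tv_nsec);
	return 0;
}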


@@ -718,7 +718,7 @@ xfs_calc_clear_agi_bucket_reservation(
 /*
 * Adjusting quota limits.
-* the xfs_disk_dquot_t: sizeof(struct xfs_disk_dquot)
+* the disk quota buffer: sizeof(struct xfs_disk_dquot)
 */
 STATIC uint
 xfs_calc_qm_setqlim_reservation(void)
@@ -742,7 +742,7 @@ xfs_calc_qm_dqalloc_reservation(
 /*
 * Turning off quotas.
-* the xfs_qoff_logitem_t: sizeof(struct xfs_qoff_logitem) * 2
+* the quota off logitems: sizeof(struct xfs_qoff_logitem) * 2
 * the superblock for the quota flags: sector size
 */
 STATIC uint
@@ -755,7 +755,7 @@ xfs_calc_qm_quotaoff_reservation(
 /*
 * End of turning off quotas.
-* the xfs_qoff_logitem_t: sizeof(struct xfs_qoff_logitem) * 2
+* the quota off logitems: sizeof(struct xfs_qoff_logitem) * 2
 */
 STATIC uint
 xfs_calc_qm_quotaoff_end_reservation(void)


@@ -21,7 +21,6 @@ typedef int32_t xfs_suminfo_t; /* type of bitmap summary info */
 typedef uint32_t xfs_rtword_t; /* word type for bitmap manipulations */
 typedef int64_t xfs_lsn_t; /* log sequence number */
-typedef int32_t xfs_tid_t; /* transaction identifier */
 typedef uint32_t xfs_dablk_t; /* dir/attr block number (in file) */
 typedef uint32_t xfs_dahash_t; /* dir/attr hash value */
@@ -33,7 +32,6 @@ typedef uint64_t xfs_fileoff_t; /* block number in a file */
 typedef uint64_t xfs_filblks_t; /* number of blocks in a file */
 typedef int64_t xfs_srtblock_t; /* signed version of xfs_rtblock_t */
-typedef int64_t xfs_sfiloff_t; /* signed block number in a file */
 /*
 * New verifiers will return the instruction address of the failing check.


@@ -398,15 +398,14 @@ out:
 STATIC int
 xchk_xattr_rec(
 struct xchk_da_btree *ds,
-int level,
-void *rec)
+int level)
 {
 struct xfs_mount *mp = ds->state->mp;
-struct xfs_attr_leaf_entry *ent = rec;
-struct xfs_da_state_blk *blk;
+struct xfs_da_state_blk *blk = &ds->state->path.blk[level];
 struct xfs_attr_leaf_name_local *lentry;
 struct xfs_attr_leaf_name_remote *rentry;
 struct xfs_buf *bp;
+struct xfs_attr_leaf_entry *ent;
 xfs_dahash_t calc_hash;
 xfs_dahash_t hash;
 int nameidx;
@@ -414,7 +413,9 @@ xchk_xattr_rec(
 unsigned int badflags;
 int error;
-blk = &ds->state->path.blk[level];
+ASSERT(blk->magic == XFS_ATTR_LEAF_MAGIC);
+ent = xfs_attr3_leaf_entryp(blk->bp->b_addr) + blk->index;
 /* Check the whole block, if necessary. */
 error = xchk_xattr_block(ds, level);
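
The xchk_xattr_rec() change drops the untyped void *rec argument: record callbacks now take only the level and derive the current entry from the da btree path (blk->bp plus blk->index). The dabtree and dir scrubbers below get the same treatment. A hedged sketch of that callback-signature simplification, with toy types standing in for the XFS ones:

#include <stdio.h>

/* Toy walk state: one block per level, plus an index into that block. */
struct walk_blk {
	const int *entries;
	int nentries;
	int index;
};

struct walk_state {
	struct walk_blk blk[4];
};

/* New-style callback: derive the record from the path, no void *rec. */
typedef int (*rec_fn)(struct walk_state *ws, int level);

static int print_rec(struct walk_state *ws, int level)
{
	const struct walk_blk *blk = &ws->blk[level];

	printf("level %d entry %d = %d\n", level, blk->index,
	       blk->entries[blk->index]);
	return 0;
}

int main(void)
{
	static const int leaf[] = { 10, 20, 30 };
	struct walk_state ws = { .blk = { [1] = { leaf, 3, 2 } } };
	rec_fn fn = print_rec;

	return fn(&ws, 1);
}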


@@ -294,5 +294,6 @@ xfs_bitmap_set_btblocks(
 struct xfs_bitmap *bitmap,
 struct xfs_btree_cur *cur)
 {
-return xfs_btree_visit_blocks(cur, xfs_bitmap_collect_btblock, bitmap);
+return xfs_btree_visit_blocks(cur, xfs_bitmap_collect_btblock,
+XFS_BTREE_VISIT_ALL, bitmap);
 }


@@ -14,8 +14,15 @@
 static inline bool
 xchk_should_terminate(
 struct xfs_scrub *sc,
 int *error)
 {
+/*
+ * If preemption is disabled, we need to yield to the scheduler every
+ * few seconds so that we don't run afoul of the soft lockup watchdog
+ * or RCU stall detector.
+ */
+cond_resched();
+
 if (fatal_signal_pending(current)) {
 if (*error == 0)
 *error = -EAGAIN;
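
xchk_should_terminate() now calls cond_resched() before checking for fatal signals, so a long scrub loop neither trips the soft lockup watchdog nor ignores a pending SIGKILL; xchk_dir_actor() below starts calling it for every directory entry it visits. A rough userspace analogue of that loop discipline, with sched_yield() and a signal flag standing in for cond_resched() and fatal_signal_pending():

#include <errno.h>
#include <sched.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>

static volatile sig_atomic_t got_signal;

static void handle_int(int sig)
{
	(void)sig;
	got_signal = 1;
}

/* Userspace stand-in for xchk_should_terminate(). */
static bool should_terminate(int *error)
{
	sched_yield();			/* plays the role of cond_resched() */
	if (got_signal) {
		if (*error == 0)
			*error = -EAGAIN;	/* mirrors the kernel helper */
		return true;
	}
	return false;
}

int main(void)
{
	int error = 0;
	long i;

	signal(SIGINT, handle_int);

	/* A long-running scan that stays responsive to signals. */
	for (i = 0; i < 100000000L; i++) {
		if ((i % 4096) == 0 && should_terminate(&error))
			break;
		/* ...per-record checking work would go here... */
	}

	printf("stopped at %ld, error %d\n", i, error);
	return error ? 1 : 0;
}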


@ -77,40 +77,18 @@ xchk_da_set_corrupt(
__return_address); __return_address);
} }
/* Find an entry at a certain level in a da btree. */ static struct xfs_da_node_entry *
STATIC void * xchk_da_btree_node_entry(
xchk_da_btree_entry( struct xchk_da_btree *ds,
struct xchk_da_btree *ds, int level)
int level,
int rec)
{ {
char *ents; struct xfs_da_state_blk *blk = &ds->state->path.blk[level];
struct xfs_da_state_blk *blk; struct xfs_da3_icnode_hdr hdr;
void *baddr;
/* Dispatch the entry finding function. */ ASSERT(blk->magic == XFS_DA_NODE_MAGIC);
blk = &ds->state->path.blk[level];
baddr = blk->bp->b_addr;
switch (blk->magic) {
case XFS_ATTR_LEAF_MAGIC:
case XFS_ATTR3_LEAF_MAGIC:
ents = (char *)xfs_attr3_leaf_entryp(baddr);
return ents + (rec * sizeof(struct xfs_attr_leaf_entry));
case XFS_DIR2_LEAFN_MAGIC:
case XFS_DIR3_LEAFN_MAGIC:
ents = (char *)ds->dargs.dp->d_ops->leaf_ents_p(baddr);
return ents + (rec * sizeof(struct xfs_dir2_leaf_entry));
case XFS_DIR2_LEAF1_MAGIC:
case XFS_DIR3_LEAF1_MAGIC:
ents = (char *)ds->dargs.dp->d_ops->leaf_ents_p(baddr);
return ents + (rec * sizeof(struct xfs_dir2_leaf_entry));
case XFS_DA_NODE_MAGIC:
case XFS_DA3_NODE_MAGIC:
ents = (char *)ds->dargs.dp->d_ops->node_tree_p(baddr);
return ents + (rec * sizeof(struct xfs_da_node_entry));
}
return NULL; xfs_da3_node_hdr_from_disk(ds->sc->mp, &hdr, blk->bp->b_addr);
return hdr.btree + blk->index;
} }
/* Scrub a da btree hash (key). */ /* Scrub a da btree hash (key). */
@ -120,7 +98,6 @@ xchk_da_btree_hash(
int level, int level,
__be32 *hashp) __be32 *hashp)
{ {
struct xfs_da_state_blk *blks;
struct xfs_da_node_entry *entry; struct xfs_da_node_entry *entry;
xfs_dahash_t hash; xfs_dahash_t hash;
xfs_dahash_t parent_hash; xfs_dahash_t parent_hash;
@ -135,8 +112,7 @@ xchk_da_btree_hash(
return 0; return 0;
/* Is this hash no larger than the parent hash? */ /* Is this hash no larger than the parent hash? */
blks = ds->state->path.blk; entry = xchk_da_btree_node_entry(ds, level - 1);
entry = xchk_da_btree_entry(ds, level - 1, blks[level - 1].index);
parent_hash = be32_to_cpu(entry->hashval); parent_hash = be32_to_cpu(entry->hashval);
if (parent_hash < hash) if (parent_hash < hash)
xchk_da_set_corrupt(ds, level); xchk_da_set_corrupt(ds, level);
@ -355,8 +331,8 @@ xchk_da_btree_block(
goto out_nobuf; goto out_nobuf;
/* Read the buffer. */ /* Read the buffer. */
error = xfs_da_read_buf(dargs->trans, dargs->dp, blk->blkno, -2, error = xfs_da_read_buf(dargs->trans, dargs->dp, blk->blkno,
&blk->bp, dargs->whichfork, XFS_DABUF_MAP_HOLE_OK, &blk->bp, dargs->whichfork,
&xchk_da_btree_buf_ops); &xchk_da_btree_buf_ops);
if (!xchk_da_process_error(ds, level, &error)) if (!xchk_da_process_error(ds, level, &error))
goto out_nobuf; goto out_nobuf;
@ -433,8 +409,8 @@ xchk_da_btree_block(
XFS_BLFT_DA_NODE_BUF); XFS_BLFT_DA_NODE_BUF);
blk->magic = XFS_DA_NODE_MAGIC; blk->magic = XFS_DA_NODE_MAGIC;
node = blk->bp->b_addr; node = blk->bp->b_addr;
ip->d_ops->node_hdr_from_disk(&nodehdr, node); xfs_da3_node_hdr_from_disk(ip->i_mount, &nodehdr, node);
btree = ip->d_ops->node_tree_p(node); btree = nodehdr.btree;
*pmaxrecs = nodehdr.count; *pmaxrecs = nodehdr.count;
blk->hashval = be32_to_cpu(btree[*pmaxrecs - 1].hashval); blk->hashval = be32_to_cpu(btree[*pmaxrecs - 1].hashval);
if (level == 0) { if (level == 0) {
@ -479,14 +455,12 @@ xchk_da_btree(
struct xfs_mount *mp = sc->mp; struct xfs_mount *mp = sc->mp;
struct xfs_da_state_blk *blks; struct xfs_da_state_blk *blks;
struct xfs_da_node_entry *key; struct xfs_da_node_entry *key;
void *rec;
xfs_dablk_t blkno; xfs_dablk_t blkno;
int level; int level;
int error; int error;
/* Skip short format data structures; no btree to scan. */ /* Skip short format data structures; no btree to scan. */
if (XFS_IFORK_FORMAT(sc->ip, whichfork) != XFS_DINODE_FMT_EXTENTS && if (!xfs_ifork_has_extents(sc->ip, whichfork))
XFS_IFORK_FORMAT(sc->ip, whichfork) != XFS_DINODE_FMT_BTREE)
return 0; return 0;
/* Set up initial da state. */ /* Set up initial da state. */
@ -538,9 +512,7 @@ xchk_da_btree(
} }
/* Dispatch record scrubbing. */ /* Dispatch record scrubbing. */
rec = xchk_da_btree_entry(&ds, level, error = scrub_fn(&ds, level);
blks[level].index);
error = scrub_fn(&ds, level, rec);
if (error) if (error)
break; break;
if (xchk_should_terminate(sc, &error) || if (xchk_should_terminate(sc, &error) ||
@ -562,7 +534,7 @@ xchk_da_btree(
} }
/* Hashes in order for scrub? */ /* Hashes in order for scrub? */
key = xchk_da_btree_entry(&ds, level, blks[level].index); key = xchk_da_btree_node_entry(&ds, level);
error = xchk_da_btree_hash(&ds, level, &key->hashval); error = xchk_da_btree_hash(&ds, level, &key->hashval);
if (error) if (error)
goto out; goto out;


@@ -28,8 +28,7 @@ struct xchk_da_btree {
 int tree_level;
 };
-typedef int (*xchk_da_btree_rec_fn)(struct xchk_da_btree *ds,
-int level, void *rec);
+typedef int (*xchk_da_btree_rec_fn)(struct xchk_da_btree *ds, int level);
 /* Check for da btree operation errors. */
 bool xchk_da_process_error(struct xchk_da_btree *ds, int level, int *error);


@@ -113,6 +113,9 @@ xchk_dir_actor(
 offset = xfs_dir2_db_to_da(mp->m_dir_geo,
 xfs_dir2_dataptr_to_db(mp->m_dir_geo, pos));
+if (xchk_should_terminate(sdc->sc, &error))
+return error;
+
 /* Does this inode number make sense? */
 if (!xfs_verify_dir_ino(mp, ino)) {
 xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
@ -179,15 +182,17 @@ out:
STATIC int STATIC int
xchk_dir_rec( xchk_dir_rec(
struct xchk_da_btree *ds, struct xchk_da_btree *ds,
int level, int level)
void *rec)
{ {
struct xfs_da_state_blk *blk = &ds->state->path.blk[level];
struct xfs_mount *mp = ds->state->mp; struct xfs_mount *mp = ds->state->mp;
struct xfs_dir2_leaf_entry *ent = rec;
struct xfs_inode *dp = ds->dargs.dp; struct xfs_inode *dp = ds->dargs.dp;
struct xfs_da_geometry *geo = mp->m_dir_geo;
struct xfs_dir2_data_entry *dent; struct xfs_dir2_data_entry *dent;
struct xfs_buf *bp; struct xfs_buf *bp;
char *p, *endp; struct xfs_dir2_leaf_entry *ent;
unsigned int end;
unsigned int iter_off;
xfs_ino_t ino; xfs_ino_t ino;
xfs_dablk_t rec_bno; xfs_dablk_t rec_bno;
xfs_dir2_db_t db; xfs_dir2_db_t db;
@ -195,9 +200,16 @@ xchk_dir_rec(
xfs_dir2_dataptr_t ptr; xfs_dir2_dataptr_t ptr;
xfs_dahash_t calc_hash; xfs_dahash_t calc_hash;
xfs_dahash_t hash; xfs_dahash_t hash;
struct xfs_dir3_icleaf_hdr hdr;
unsigned int tag; unsigned int tag;
int error; int error;
ASSERT(blk->magic == XFS_DIR2_LEAF1_MAGIC ||
blk->magic == XFS_DIR2_LEAFN_MAGIC);
xfs_dir2_leaf_hdr_from_disk(mp, &hdr, blk->bp->b_addr);
ent = hdr.ents + blk->index;
/* Check the hash of the entry. */ /* Check the hash of the entry. */
error = xchk_da_btree_hash(ds, level, &ent->hashval); error = xchk_da_btree_hash(ds, level, &ent->hashval);
if (error) if (error)
@ -209,15 +221,16 @@ xchk_dir_rec(
return 0; return 0;
/* Find the directory entry's location. */ /* Find the directory entry's location. */
db = xfs_dir2_dataptr_to_db(mp->m_dir_geo, ptr); db = xfs_dir2_dataptr_to_db(geo, ptr);
off = xfs_dir2_dataptr_to_off(mp->m_dir_geo, ptr); off = xfs_dir2_dataptr_to_off(geo, ptr);
rec_bno = xfs_dir2_db_to_da(mp->m_dir_geo, db); rec_bno = xfs_dir2_db_to_da(geo, db);
if (rec_bno >= mp->m_dir_geo->leafblk) { if (rec_bno >= geo->leafblk) {
xchk_da_set_corrupt(ds, level); xchk_da_set_corrupt(ds, level);
goto out; goto out;
} }
error = xfs_dir3_data_read(ds->dargs.trans, dp, rec_bno, -2, &bp); error = xfs_dir3_data_read(ds->dargs.trans, dp, rec_bno,
XFS_DABUF_MAP_HOLE_OK, &bp);
if (!xchk_fblock_process_error(ds->sc, XFS_DATA_FORK, rec_bno, if (!xchk_fblock_process_error(ds->sc, XFS_DATA_FORK, rec_bno,
&error)) &error))
goto out; goto out;
@ -230,38 +243,37 @@ xchk_dir_rec(
if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT) if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out_relse; goto out_relse;
dent = (struct xfs_dir2_data_entry *)(((char *)bp->b_addr) + off); dent = bp->b_addr + off;
/* Make sure we got a real directory entry. */ /* Make sure we got a real directory entry. */
p = (char *)mp->m_dir_inode_ops->data_entry_p(bp->b_addr); iter_off = geo->data_entry_offset;
endp = xfs_dir3_data_endp(mp->m_dir_geo, bp->b_addr); end = xfs_dir3_data_end_offset(geo, bp->b_addr);
if (!endp) { if (!end) {
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno); xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse; goto out_relse;
} }
while (p < endp) { for (;;) {
struct xfs_dir2_data_entry *dep; struct xfs_dir2_data_entry *dep = bp->b_addr + iter_off;
struct xfs_dir2_data_unused *dup; struct xfs_dir2_data_unused *dup = bp->b_addr + iter_off;
if (iter_off >= end) {
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse;
}
dup = (struct xfs_dir2_data_unused *)p;
if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
p += be16_to_cpu(dup->length); iter_off += be16_to_cpu(dup->length);
continue; continue;
} }
dep = (struct xfs_dir2_data_entry *)p;
if (dep == dent) if (dep == dent)
break; break;
p += mp->m_dir_inode_ops->data_entsize(dep->namelen); iter_off += xfs_dir2_data_entsize(mp, dep->namelen);
}
if (p >= endp) {
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse;
} }
/* Retrieve the entry, sanity check it, and compare hashes. */ /* Retrieve the entry, sanity check it, and compare hashes. */
ino = be64_to_cpu(dent->inumber); ino = be64_to_cpu(dent->inumber);
hash = be32_to_cpu(ent->hashval); hash = be32_to_cpu(ent->hashval);
tag = be16_to_cpup(dp->d_ops->data_entry_tag_p(dent)); tag = be16_to_cpup(xfs_dir2_data_entry_tag_p(mp, dent));
if (!xfs_verify_dir_ino(mp, ino) || tag != off) if (!xfs_verify_dir_ino(mp, ino) || tag != off)
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno); xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
if (dent->namelen == 0) { if (dent->namelen == 0) {
@ -319,19 +331,15 @@ xchk_directory_data_bestfree(
struct xfs_buf *bp; struct xfs_buf *bp;
struct xfs_dir2_data_free *bf; struct xfs_dir2_data_free *bf;
struct xfs_mount *mp = sc->mp; struct xfs_mount *mp = sc->mp;
const struct xfs_dir_ops *d_ops;
char *ptr;
char *endptr;
u16 tag; u16 tag;
unsigned int nr_bestfrees = 0; unsigned int nr_bestfrees = 0;
unsigned int nr_frees = 0; unsigned int nr_frees = 0;
unsigned int smallest_bestfree; unsigned int smallest_bestfree;
int newlen; int newlen;
int offset; unsigned int offset;
unsigned int end;
int error; int error;
d_ops = sc->ip->d_ops;
if (is_block) { if (is_block) {
/* dir block format */ /* dir block format */
if (lblk != XFS_B_TO_FSBT(mp, XFS_DIR2_DATA_OFFSET)) if (lblk != XFS_B_TO_FSBT(mp, XFS_DIR2_DATA_OFFSET))
@ -339,7 +347,7 @@ xchk_directory_data_bestfree(
error = xfs_dir3_block_read(sc->tp, sc->ip, &bp); error = xfs_dir3_block_read(sc->tp, sc->ip, &bp);
} else { } else {
/* dir data format */ /* dir data format */
error = xfs_dir3_data_read(sc->tp, sc->ip, lblk, -1, &bp); error = xfs_dir3_data_read(sc->tp, sc->ip, lblk, 0, &bp);
} }
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error)) if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out; goto out;
@ -351,7 +359,7 @@ xchk_directory_data_bestfree(
goto out_buf; goto out_buf;
/* Do the bestfrees correspond to actual free space? */ /* Do the bestfrees correspond to actual free space? */
bf = d_ops->data_bestfree_p(bp->b_addr); bf = xfs_dir2_data_bestfree_p(mp, bp->b_addr);
smallest_bestfree = UINT_MAX; smallest_bestfree = UINT_MAX;
for (dfp = &bf[0]; dfp < &bf[XFS_DIR2_DATA_FD_COUNT]; dfp++) { for (dfp = &bf[0]; dfp < &bf[XFS_DIR2_DATA_FD_COUNT]; dfp++) {
offset = be16_to_cpu(dfp->offset); offset = be16_to_cpu(dfp->offset);
@ -361,13 +369,13 @@ xchk_directory_data_bestfree(
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf; goto out_buf;
} }
dup = (struct xfs_dir2_data_unused *)(bp->b_addr + offset); dup = bp->b_addr + offset;
tag = be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup)); tag = be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup));
/* bestfree doesn't match the entry it points at? */ /* bestfree doesn't match the entry it points at? */
if (dup->freetag != cpu_to_be16(XFS_DIR2_DATA_FREE_TAG) || if (dup->freetag != cpu_to_be16(XFS_DIR2_DATA_FREE_TAG) ||
be16_to_cpu(dup->length) != be16_to_cpu(dfp->length) || be16_to_cpu(dup->length) != be16_to_cpu(dfp->length) ||
tag != ((char *)dup - (char *)bp->b_addr)) { tag != offset) {
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf; goto out_buf;
} }
@ -383,30 +391,30 @@ xchk_directory_data_bestfree(
} }
/* Make sure the bestfrees are actually the best free spaces. */ /* Make sure the bestfrees are actually the best free spaces. */
ptr = (char *)d_ops->data_entry_p(bp->b_addr); offset = mp->m_dir_geo->data_entry_offset;
endptr = xfs_dir3_data_endp(mp->m_dir_geo, bp->b_addr); end = xfs_dir3_data_end_offset(mp->m_dir_geo, bp->b_addr);
/* Iterate the entries, stopping when we hit or go past the end. */ /* Iterate the entries, stopping when we hit or go past the end. */
while (ptr < endptr) { while (offset < end) {
dup = (struct xfs_dir2_data_unused *)ptr; dup = bp->b_addr + offset;
/* Skip real entries */ /* Skip real entries */
if (dup->freetag != cpu_to_be16(XFS_DIR2_DATA_FREE_TAG)) { if (dup->freetag != cpu_to_be16(XFS_DIR2_DATA_FREE_TAG)) {
struct xfs_dir2_data_entry *dep; struct xfs_dir2_data_entry *dep = bp->b_addr + offset;
dep = (struct xfs_dir2_data_entry *)ptr; newlen = xfs_dir2_data_entsize(mp, dep->namelen);
newlen = d_ops->data_entsize(dep->namelen);
if (newlen <= 0) { if (newlen <= 0) {
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
lblk); lblk);
goto out_buf; goto out_buf;
} }
ptr += newlen; offset += newlen;
continue; continue;
} }
/* Spot check this free entry */ /* Spot check this free entry */
tag = be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup)); tag = be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup));
if (tag != ((char *)dup - (char *)bp->b_addr)) { if (tag != offset) {
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf; goto out_buf;
} }
@ -425,13 +433,13 @@ xchk_directory_data_bestfree(
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf; goto out_buf;
} }
ptr += newlen; offset += newlen;
if (ptr <= endptr) if (offset <= end)
nr_frees++; nr_frees++;
} }
/* We're required to fill all the space. */ /* We're required to fill all the space. */
if (ptr != endptr) if (offset != end)
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
/* Did we see at least as many free slots as there are bestfrees? */ /* Did we see at least as many free slots as there are bestfrees? */
@ -458,7 +466,7 @@ xchk_directory_check_freesp(
{ {
struct xfs_dir2_data_free *dfp; struct xfs_dir2_data_free *dfp;
dfp = sc->ip->d_ops->data_bestfree_p(dbp->b_addr); dfp = xfs_dir2_data_bestfree_p(sc->mp, dbp->b_addr);
if (len != be16_to_cpu(dfp->length)) if (len != be16_to_cpu(dfp->length))
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
@ -475,12 +483,10 @@ xchk_directory_leaf1_bestfree(
xfs_dablk_t lblk) xfs_dablk_t lblk)
{ {
struct xfs_dir3_icleaf_hdr leafhdr; struct xfs_dir3_icleaf_hdr leafhdr;
struct xfs_dir2_leaf_entry *ents;
struct xfs_dir2_leaf_tail *ltp; struct xfs_dir2_leaf_tail *ltp;
struct xfs_dir2_leaf *leaf; struct xfs_dir2_leaf *leaf;
struct xfs_buf *dbp; struct xfs_buf *dbp;
struct xfs_buf *bp; struct xfs_buf *bp;
const struct xfs_dir_ops *d_ops = sc->ip->d_ops;
struct xfs_da_geometry *geo = sc->mp->m_dir_geo; struct xfs_da_geometry *geo = sc->mp->m_dir_geo;
__be16 *bestp; __be16 *bestp;
__u16 best; __u16 best;
@ -492,14 +498,13 @@ xchk_directory_leaf1_bestfree(
int error; int error;
/* Read the free space block. */ /* Read the free space block. */
error = xfs_dir3_leaf_read(sc->tp, sc->ip, lblk, -1, &bp); error = xfs_dir3_leaf_read(sc->tp, sc->ip, lblk, &bp);
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error)) if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out; goto out;
xchk_buffer_recheck(sc, bp); xchk_buffer_recheck(sc, bp);
leaf = bp->b_addr; leaf = bp->b_addr;
d_ops->leaf_hdr_from_disk(&leafhdr, leaf); xfs_dir2_leaf_hdr_from_disk(sc->ip->i_mount, &leafhdr, leaf);
ents = d_ops->leaf_ents_p(leaf);
ltp = xfs_dir2_leaf_tail_p(geo, leaf); ltp = xfs_dir2_leaf_tail_p(geo, leaf);
bestcount = be32_to_cpu(ltp->bestcount); bestcount = be32_to_cpu(ltp->bestcount);
bestp = xfs_dir2_leaf_bests_p(ltp); bestp = xfs_dir2_leaf_bests_p(ltp);
@ -521,24 +526,25 @@ xchk_directory_leaf1_bestfree(
} }
/* Is the leaf count even remotely sane? */ /* Is the leaf count even remotely sane? */
if (leafhdr.count > d_ops->leaf_max_ents(geo)) { if (leafhdr.count > geo->leaf_max_ents) {
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out; goto out;
} }
/* Leaves and bests don't overlap in leaf format. */ /* Leaves and bests don't overlap in leaf format. */
if ((char *)&ents[leafhdr.count] > (char *)bestp) { if ((char *)&leafhdr.ents[leafhdr.count] > (char *)bestp) {
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out; goto out;
} }
/* Check hash value order, count stale entries. */ /* Check hash value order, count stale entries. */
for (i = 0; i < leafhdr.count; i++) { for (i = 0; i < leafhdr.count; i++) {
hash = be32_to_cpu(ents[i].hashval); hash = be32_to_cpu(leafhdr.ents[i].hashval);
if (i > 0 && lasthash > hash) if (i > 0 && lasthash > hash)
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
lasthash = hash; lasthash = hash;
if (ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR)) if (leafhdr.ents[i].address ==
cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
stale++; stale++;
} }
if (leafhdr.stale != stale) if (leafhdr.stale != stale)
@ -552,7 +558,7 @@ xchk_directory_leaf1_bestfree(
if (best == NULLDATAOFF) if (best == NULLDATAOFF)
continue; continue;
error = xfs_dir3_data_read(sc->tp, sc->ip, error = xfs_dir3_data_read(sc->tp, sc->ip,
i * args->geo->fsbcount, -1, &dbp); i * args->geo->fsbcount, 0, &dbp);
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk,
&error)) &error))
break; break;
@ -575,7 +581,6 @@ xchk_directory_free_bestfree(
struct xfs_dir3_icfree_hdr freehdr; struct xfs_dir3_icfree_hdr freehdr;
struct xfs_buf *dbp; struct xfs_buf *dbp;
struct xfs_buf *bp; struct xfs_buf *bp;
__be16 *bestp;
__u16 best; __u16 best;
unsigned int stale = 0; unsigned int stale = 0;
int i; int i;
@ -595,17 +600,16 @@ xchk_directory_free_bestfree(
} }
/* Check all the entries. */ /* Check all the entries. */
sc->ip->d_ops->free_hdr_from_disk(&freehdr, bp->b_addr); xfs_dir2_free_hdr_from_disk(sc->ip->i_mount, &freehdr, bp->b_addr);
bestp = sc->ip->d_ops->free_bests_p(bp->b_addr); for (i = 0; i < freehdr.nvalid; i++) {
for (i = 0; i < freehdr.nvalid; i++, bestp++) { best = be16_to_cpu(freehdr.bests[i]);
best = be16_to_cpu(*bestp);
if (best == NULLDATAOFF) { if (best == NULLDATAOFF) {
stale++; stale++;
continue; continue;
} }
error = xfs_dir3_data_read(sc->tp, sc->ip, error = xfs_dir3_data_read(sc->tp, sc->ip,
(freehdr.firstdb + i) * args->geo->fsbcount, (freehdr.firstdb + i) * args->geo->fsbcount,
-1, &dbp); 0, &dbp);
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk,
&error)) &error))
break; break;
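The directory scrubber above stops walking data blocks with raw char pointers and instead iterates by byte offset from the start of the buffer, using the directory geometry and xfs_dir2_data_entsize() in place of the per-inode d_ops vector. A condensed restatement of that iteration pattern, with the corruption checks elided for brevity (all identifiers as in the diff above):

	unsigned int offset = geo->data_entry_offset;
	unsigned int end = xfs_dir3_data_end_offset(geo, bp->b_addr);

	while (offset < end) {
		struct xfs_dir2_data_unused	*dup = bp->b_addr + offset;
		struct xfs_dir2_data_entry	*dep = bp->b_addr + offset;

		if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
			/* unused space: skip ahead by its recorded length */
			offset += be16_to_cpu(dup->length);
			continue;
		}
		/* ... examine the real entry 'dep' here ... */
		offset += xfs_dir2_data_entsize(mp, dep->namelen);
	}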


@@ -104,7 +104,7 @@ next_loop_perag:
 		pag = NULL;
 		error = 0;
-		if (fatal_signal_pending(current))
+		if (xchk_should_terminate(sc, &error))
 			break;
 	}
 
@@ -163,6 +163,7 @@ xchk_fscount_aggregate_agcounts(
 	uint64_t		delayed;
 	xfs_agnumber_t		agno;
 	int			tries = 8;
+	int			error = 0;
 
 retry:
 	fsc->icount = 0;
@@ -196,10 +197,13 @@ retry:
 		xfs_perag_put(pag);
 
-		if (fatal_signal_pending(current))
+		if (xchk_should_terminate(sc, &error))
 			break;
 	}
 
+	if (error)
+		return error;
+
 	/*
 	 * The global incore space reservation is taken from the incore
 	 * counters, so leave that out of the computation.
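These loops (and the quota and parent scrubbers further down) replace bare fatal_signal_pending() checks with xchk_should_terminate(), which also latches an error code so the caller can unwind cleanly instead of looping until a soft lockup warning fires. The real helper lives in the scrub common code; as a simplified sketch it boils down to roughly this:

	/* Simplified sketch only -- see fs/xfs/scrub/common.h for the real helper. */
	static inline bool
	xchk_should_terminate_sketch(
		struct xfs_scrub	*sc,
		int			*error)
	{
		if (fatal_signal_pending(current)) {
			if (*error == 0)
				*error = -EAGAIN;	/* ask userspace to retry */
			return true;
		}
		return false;
	}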


@@ -11,6 +11,7 @@
 #include "xfs_sb.h"
 #include "xfs_health.h"
 #include "scrub/scrub.h"
+#include "scrub/health.h"
 
 /*
  * Scrub and In-Core Filesystem Health Assessments


@@ -32,8 +32,10 @@ xchk_setup_parent(
 
 struct xchk_parent_ctx {
 	struct dir_context	dc;
+	struct xfs_scrub	*sc;
 	xfs_ino_t		ino;
 	xfs_nlink_t		nlink;
+	bool			cancelled;
 };
 
 /* Look for a single entry in a directory pointing to an inode. */
@@ -47,11 +49,21 @@ xchk_parent_actor(
 	unsigned		type)
 {
 	struct xchk_parent_ctx	*spc;
+	int			error = 0;
 
 	spc = container_of(dc, struct xchk_parent_ctx, dc);
 	if (spc->ino == ino)
 		spc->nlink++;
-	return 0;
+
+	/*
+	 * If we're facing a fatal signal, bail out. Store the cancellation
+	 * status separately because the VFS readdir code squashes error codes
+	 * into short directory reads.
+	 */
+	if (xchk_should_terminate(spc->sc, &error))
+		spc->cancelled = true;
+
+	return error;
 }
 
 /* Count the number of dentries in the parent dir that point to this inode. */
@@ -62,10 +74,9 @@ xchk_parent_count_parent_dentries(
 	xfs_nlink_t		*nlink)
 {
 	struct xchk_parent_ctx	spc = {
 		.dc.actor = xchk_parent_actor,
-		.dc.pos = 0,
-		.ino = sc->ip->i_ino,
-		.nlink = 0,
+		.ino = sc->ip->i_ino,
+		.sc = sc,
 	};
 	size_t			bufsize;
 	loff_t			oldpos;
@@ -80,7 +91,7 @@ xchk_parent_count_parent_dentries(
 	 */
 	lock_mode = xfs_ilock_data_map_shared(parent);
 	if (parent->i_d.di_nextents > 0)
-		error = xfs_dir3_data_readahead(parent, 0, -1);
+		error = xfs_dir3_data_readahead(parent, 0, 0);
 	xfs_iunlock(parent, lock_mode);
 	if (error)
 		return error;
@@ -97,6 +108,10 @@ xchk_parent_count_parent_dentries(
 		error = xfs_readdir(sc->tp, parent, &spc.dc, bufsize);
 		if (error)
 			goto out;
+		if (spc.cancelled) {
+			error = -EAGAIN;
+			goto out;
+		}
 		if (oldpos == spc.dc.pos)
 			break;
 		oldpos = spc.dc.pos;


@@ -93,6 +93,10 @@ xchk_quota_item(
 	unsigned long long	rcount;
 	xfs_ino_t		fs_icount;
 	xfs_dqid_t		id = be32_to_cpu(d->d_id);
+	int			error = 0;
+
+	if (xchk_should_terminate(sc, &error))
+		return error;
 
 	/*
 	 * Except for the root dquot, the actual dquot we got must either have
@@ -178,6 +182,9 @@ xchk_quota_item(
 	if (id != 0 && rhard != 0 && rcount > rhard)
 		xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
 
+	if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
+		return -EFSCORRUPTED;
+
 	return 0;
 }


@@ -16,6 +16,7 @@
 #include "xfs_qm.h"
 #include "xfs_errortag.h"
 #include "xfs_error.h"
+#include "xfs_scrub.h"
 #include "scrub/scrub.h"
 #include "scrub/common.h"
 #include "scrub/trace.h"


@@ -12,8 +12,10 @@
 #include "xfs_inode.h"
 #include "xfs_attr.h"
 #include "xfs_trace.h"
-#include <linux/posix_acl_xattr.h>
+#include "xfs_error.h"
+#include "xfs_acl.h"
+
+#include <linux/posix_acl_xattr.h>
 
 /*
  * Locking scheme:
@@ -23,6 +25,7 @@
 STATIC struct posix_acl *
 xfs_acl_from_disk(
+	struct xfs_mount	*mp,
 	const struct xfs_acl	*aclp,
 	int			len,
 	int			max_entries)
@@ -32,11 +35,18 @@ xfs_acl_from_disk(
 	const struct xfs_acl_entry *ace;
 	unsigned int count, i;
 
-	if (len < sizeof(*aclp))
+	if (len < sizeof(*aclp)) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, aclp,
+				len);
 		return ERR_PTR(-EFSCORRUPTED);
+	}
+
 	count = be32_to_cpu(aclp->acl_cnt);
-	if (count > max_entries || XFS_ACL_SIZE(count) != len)
+	if (count > max_entries || XFS_ACL_SIZE(count) != len) {
+		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, aclp,
+				len);
 		return ERR_PTR(-EFSCORRUPTED);
+	}
 
 	acl = posix_acl_alloc(count, GFP_KERNEL);
 	if (!acl)
@@ -145,7 +155,7 @@ xfs_get_acl(struct inode *inode, int type)
 		if (error != -ENOATTR)
 			acl = ERR_PTR(error);
 	} else {
-		acl = xfs_acl_from_disk(xfs_acl, len,
+		acl = xfs_acl_from_disk(ip->i_mount, xfs_acl, len,
 				XFS_ACL_MAX_ENTRIES(ip->i_mount));
 		kmem_free(xfs_acl);
 	}


@@ -30,32 +30,6 @@ XFS_WPC(struct iomap_writepage_ctx *ctx)
 	return container_of(ctx, struct xfs_writepage_ctx, ctx);
 }
 
-struct block_device *
-xfs_find_bdev_for_inode(
-	struct inode		*inode)
-{
-	struct xfs_inode	*ip = XFS_I(inode);
-	struct xfs_mount	*mp = ip->i_mount;
-
-	if (XFS_IS_REALTIME_INODE(ip))
-		return mp->m_rtdev_targp->bt_bdev;
-	else
-		return mp->m_ddev_targp->bt_bdev;
-}
-
-struct dax_device *
-xfs_find_daxdev_for_inode(
-	struct inode		*inode)
-{
-	struct xfs_inode	*ip = XFS_I(inode);
-	struct xfs_mount	*mp = ip->i_mount;
-
-	if (XFS_IS_REALTIME_INODE(ip))
-		return mp->m_rtdev_targp->bt_daxdev;
-	else
-		return mp->m_ddev_targp->bt_daxdev;
-}
-
 /*
  * Fast and loose check if this write could update the on-disk inode size.
  */
@@ -609,9 +583,11 @@ xfs_dax_writepages(
 	struct address_space	*mapping,
 	struct writeback_control *wbc)
 {
-	xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
+	struct xfs_inode	*ip = XFS_I(mapping->host);
+
+	xfs_iflags_clear(ip, XFS_ITRUNCATED);
 	return dax_writeback_mapping_range(mapping,
-			xfs_find_bdev_for_inode(mapping->host), wbc);
+			xfs_inode_buftarg(ip)->bt_bdev, wbc);
 }
 
 STATIC sector_t
@@ -634,7 +610,7 @@ xfs_vm_bmap(
 	 */
 	if (xfs_is_cow_inode(ip) || XFS_IS_REALTIME_INODE(ip))
 		return 0;
-	return iomap_bmap(mapping, block, &xfs_iomap_ops);
+	return iomap_bmap(mapping, block, &xfs_read_iomap_ops);
 }
 
 STATIC int
@@ -642,7 +618,7 @@ xfs_vm_readpage(
 	struct file		*unused,
 	struct page		*page)
 {
-	return iomap_readpage(page, &xfs_iomap_ops);
+	return iomap_readpage(page, &xfs_read_iomap_ops);
 }
 
 STATIC int
@@ -652,7 +628,7 @@ xfs_vm_readpages(
 	struct list_head	*pages,
 	unsigned		nr_pages)
 {
-	return iomap_readpages(mapping, pages, nr_pages, &xfs_iomap_ops);
+	return iomap_readpages(mapping, pages, nr_pages, &xfs_read_iomap_ops);
}
 
 static int
@@ -661,8 +637,9 @@ xfs_iomap_swapfile_activate(
 	struct file			*swap_file,
 	sector_t			*span)
 {
-	sis->bdev = xfs_find_bdev_for_inode(file_inode(swap_file));
-	return iomap_swapfile_activate(sis, swap_file, span, &xfs_iomap_ops);
+	sis->bdev = xfs_inode_buftarg(XFS_I(file_inode(swap_file)))->bt_bdev;
+	return iomap_swapfile_activate(sis, swap_file, span,
+			&xfs_read_iomap_ops);
 }
 
 const struct address_space_operations xfs_address_space_operations = {
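The removed xfs_find_bdev_for_inode()/xfs_find_daxdev_for_inode() wrappers are replaced by callers picking the right buffer target with xfs_inode_buftarg() and dereferencing bt_bdev or bt_daxdev themselves. The helper's definition is not part of this hunk; based on the wrappers it replaces, it presumably amounts to nothing more than selecting between the realtime and data device targets:

	/* Sketch of the buftarg selection the new helper performs. */
	static inline struct xfs_buftarg *
	xfs_inode_buftarg(struct xfs_inode *ip)
	{
		if (XFS_IS_REALTIME_INODE(ip))
			return ip->i_mount->m_rtdev_targp;
		return ip->i_mount->m_ddev_targp;
	}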


@@ -11,7 +11,4 @@ extern const struct address_space_operations xfs_dax_aops;
 
 int xfs_setfilesize(struct xfs_inode *ip, xfs_off_t offset, size_t size);
 
-extern struct block_device *xfs_find_bdev_for_inode(struct inode *);
-extern struct dax_device *xfs_find_daxdev_for_inode(struct inode *);
-
 #endif /* __XFS_AOPS_H__ */


@ -22,6 +22,7 @@
#include "xfs_attr_leaf.h" #include "xfs_attr_leaf.h"
#include "xfs_quota.h" #include "xfs_quota.h"
#include "xfs_dir2.h" #include "xfs_dir2.h"
#include "xfs_error.h"
/* /*
* Look at all the extents for this logical region, * Look at all the extents for this logical region,
@ -190,37 +191,35 @@ xfs_attr3_leaf_inactive(
*/ */
STATIC int STATIC int
xfs_attr3_node_inactive( xfs_attr3_node_inactive(
struct xfs_trans **trans, struct xfs_trans **trans,
struct xfs_inode *dp, struct xfs_inode *dp,
struct xfs_buf *bp, struct xfs_buf *bp,
int level) int level)
{ {
xfs_da_blkinfo_t *info; struct xfs_mount *mp = dp->i_mount;
xfs_da_intnode_t *node; struct xfs_da_blkinfo *info;
xfs_dablk_t child_fsb; xfs_dablk_t child_fsb;
xfs_daddr_t parent_blkno, child_blkno; xfs_daddr_t parent_blkno, child_blkno;
int error, i; struct xfs_buf *child_bp;
struct xfs_buf *child_bp;
struct xfs_da_node_entry *btree;
struct xfs_da3_icnode_hdr ichdr; struct xfs_da3_icnode_hdr ichdr;
int error, i;
/* /*
* Since this code is recursive (gasp!) we must protect ourselves. * Since this code is recursive (gasp!) we must protect ourselves.
*/ */
if (level > XFS_DA_NODE_MAXDEPTH) { if (level > XFS_DA_NODE_MAXDEPTH) {
xfs_trans_brelse(*trans, bp); /* no locks for later trans */ xfs_trans_brelse(*trans, bp); /* no locks for later trans */
return -EIO; xfs_buf_corruption_error(bp);
return -EFSCORRUPTED;
} }
node = bp->b_addr; xfs_da3_node_hdr_from_disk(dp->i_mount, &ichdr, bp->b_addr);
dp->d_ops->node_hdr_from_disk(&ichdr, node);
parent_blkno = bp->b_bn; parent_blkno = bp->b_bn;
if (!ichdr.count) { if (!ichdr.count) {
xfs_trans_brelse(*trans, bp); xfs_trans_brelse(*trans, bp);
return 0; return 0;
} }
btree = dp->d_ops->node_tree_p(node); child_fsb = be32_to_cpu(ichdr.btree[0].before);
child_fsb = be32_to_cpu(btree[0].before);
xfs_trans_brelse(*trans, bp); /* no locks for later trans */ xfs_trans_brelse(*trans, bp); /* no locks for later trans */
/* /*
@ -235,7 +234,7 @@ xfs_attr3_node_inactive(
* traversal of the tree so we may deal with many blocks * traversal of the tree so we may deal with many blocks
* before we come back to this one. * before we come back to this one.
*/ */
error = xfs_da3_node_read(*trans, dp, child_fsb, -1, &child_bp, error = xfs_da3_node_read(*trans, dp, child_fsb, &child_bp,
XFS_ATTR_FORK); XFS_ATTR_FORK);
if (error) if (error)
return error; return error;
@ -258,8 +257,9 @@ xfs_attr3_node_inactive(
error = xfs_attr3_leaf_inactive(trans, dp, child_bp); error = xfs_attr3_leaf_inactive(trans, dp, child_bp);
break; break;
default: default:
error = -EIO; xfs_buf_corruption_error(child_bp);
xfs_trans_brelse(*trans, child_bp); xfs_trans_brelse(*trans, child_bp);
error = -EFSCORRUPTED;
break; break;
} }
if (error) if (error)
@ -268,10 +268,16 @@ xfs_attr3_node_inactive(
/* /*
* Remove the subsidiary block from the cache and from the log. * Remove the subsidiary block from the cache and from the log.
*/ */
error = xfs_da_get_buf(*trans, dp, 0, child_blkno, &child_bp, child_bp = xfs_trans_get_buf(*trans, mp->m_ddev_targp,
XFS_ATTR_FORK); child_blkno,
if (error) XFS_FSB_TO_BB(mp, mp->m_attr_geo->fsbcount), 0);
if (!child_bp)
return -EIO;
error = bp->b_error;
if (error) {
xfs_trans_brelse(*trans, child_bp);
return error; return error;
}
xfs_trans_binval(*trans, child_bp); xfs_trans_binval(*trans, child_bp);
/* /*
@ -279,13 +285,15 @@ xfs_attr3_node_inactive(
* child block number. * child block number.
*/ */
if (i + 1 < ichdr.count) { if (i + 1 < ichdr.count) {
error = xfs_da3_node_read(*trans, dp, 0, parent_blkno, struct xfs_da3_icnode_hdr phdr;
&bp, XFS_ATTR_FORK);
error = xfs_da3_node_read_mapped(*trans, dp,
parent_blkno, &bp, XFS_ATTR_FORK);
if (error) if (error)
return error; return error;
node = bp->b_addr; xfs_da3_node_hdr_from_disk(dp->i_mount, &phdr,
btree = dp->d_ops->node_tree_p(node); bp->b_addr);
child_fsb = be32_to_cpu(btree[i + 1].before); child_fsb = be32_to_cpu(phdr.btree[i + 1].before);
xfs_trans_brelse(*trans, bp); xfs_trans_brelse(*trans, bp);
} }
/* /*
@ -310,6 +318,7 @@ xfs_attr3_root_inactive(
struct xfs_trans **trans, struct xfs_trans **trans,
struct xfs_inode *dp) struct xfs_inode *dp)
{ {
struct xfs_mount *mp = dp->i_mount;
struct xfs_da_blkinfo *info; struct xfs_da_blkinfo *info;
struct xfs_buf *bp; struct xfs_buf *bp;
xfs_daddr_t blkno; xfs_daddr_t blkno;
@ -321,7 +330,7 @@ xfs_attr3_root_inactive(
* the extents in reverse order the extent containing * the extents in reverse order the extent containing
* block 0 must still be there. * block 0 must still be there.
*/ */
error = xfs_da3_node_read(*trans, dp, 0, -1, &bp, XFS_ATTR_FORK); error = xfs_da3_node_read(*trans, dp, 0, &bp, XFS_ATTR_FORK);
if (error) if (error)
return error; return error;
blkno = bp->b_bn; blkno = bp->b_bn;
@ -341,7 +350,8 @@ xfs_attr3_root_inactive(
error = xfs_attr3_leaf_inactive(trans, dp, bp); error = xfs_attr3_leaf_inactive(trans, dp, bp);
break; break;
default: default:
error = -EIO; error = -EFSCORRUPTED;
xfs_buf_corruption_error(bp);
xfs_trans_brelse(*trans, bp); xfs_trans_brelse(*trans, bp);
break; break;
} }
@ -351,9 +361,15 @@ xfs_attr3_root_inactive(
/* /*
* Invalidate the incore copy of the root block. * Invalidate the incore copy of the root block.
*/ */
error = xfs_da_get_buf(*trans, dp, 0, blkno, &bp, XFS_ATTR_FORK); bp = xfs_trans_get_buf(*trans, mp->m_ddev_targp, blkno,
if (error) XFS_FSB_TO_BB(mp, mp->m_attr_geo->fsbcount), 0);
if (!bp)
return -EIO;
error = bp->b_error;
if (error) {
xfs_trans_brelse(*trans, bp);
return error; return error;
}
xfs_trans_binval(*trans, bp); /* remove from cache */ xfs_trans_binval(*trans, bp); /* remove from cache */
/* /*
* Commit the invalidate and start the next transaction. * Commit the invalidate and start the next transaction.

View File

@ -49,14 +49,16 @@ xfs_attr_shortform_compare(const void *a, const void *b)
* we can begin returning them to the user. * we can begin returning them to the user.
*/ */
static int static int
xfs_attr_shortform_list(xfs_attr_list_context_t *context) xfs_attr_shortform_list(
struct xfs_attr_list_context *context)
{ {
attrlist_cursor_kern_t *cursor; struct attrlist_cursor_kern *cursor;
xfs_attr_sf_sort_t *sbuf, *sbp; struct xfs_attr_sf_sort *sbuf, *sbp;
xfs_attr_shortform_t *sf; struct xfs_attr_shortform *sf;
xfs_attr_sf_entry_t *sfe; struct xfs_attr_sf_entry *sfe;
xfs_inode_t *dp; struct xfs_inode *dp;
int sbsize, nsbuf, count, i; int sbsize, nsbuf, count, i;
int error = 0;
ASSERT(context != NULL); ASSERT(context != NULL);
dp = context->dp; dp = context->dp;
@ -84,6 +86,10 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
(XFS_ISRESET_CURSOR(cursor) && (XFS_ISRESET_CURSOR(cursor) &&
(dp->i_afp->if_bytes + sf->hdr.count * 16) < context->bufsize)) { (dp->i_afp->if_bytes + sf->hdr.count * 16) < context->bufsize)) {
for (i = 0, sfe = &sf->list[0]; i < sf->hdr.count; i++) { for (i = 0, sfe = &sf->list[0]; i < sf->hdr.count; i++) {
if (XFS_IS_CORRUPT(context->dp->i_mount,
!xfs_attr_namecheck(sfe->nameval,
sfe->namelen)))
return -EFSCORRUPTED;
context->put_listent(context, context->put_listent(context,
sfe->flags, sfe->flags,
sfe->nameval, sfe->nameval,
@ -161,10 +167,8 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
break; break;
} }
} }
if (i == nsbuf) { if (i == nsbuf)
kmem_free(sbuf); goto out;
return 0;
}
/* /*
* Loop putting entries into the user buffer. * Loop putting entries into the user buffer.
@ -174,6 +178,12 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
cursor->hashval = sbp->hash; cursor->hashval = sbp->hash;
cursor->offset = 0; cursor->offset = 0;
} }
if (XFS_IS_CORRUPT(context->dp->i_mount,
!xfs_attr_namecheck(sbp->name,
sbp->namelen))) {
error = -EFSCORRUPTED;
goto out;
}
context->put_listent(context, context->put_listent(context,
sbp->flags, sbp->flags,
sbp->name, sbp->name,
@ -183,9 +193,9 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
break; break;
cursor->offset++; cursor->offset++;
} }
out:
kmem_free(sbuf); kmem_free(sbuf);
return 0; return error;
} }
/* /*
@ -213,7 +223,7 @@ xfs_attr_node_list_lookup(
ASSERT(*pbp == NULL); ASSERT(*pbp == NULL);
cursor->blkno = 0; cursor->blkno = 0;
for (;;) { for (;;) {
error = xfs_da3_node_read(tp, dp, cursor->blkno, -1, &bp, error = xfs_da3_node_read(tp, dp, cursor->blkno, &bp,
XFS_ATTR_FORK); XFS_ATTR_FORK);
if (error) if (error)
return error; return error;
@ -229,7 +239,7 @@ xfs_attr_node_list_lookup(
goto out_corruptbuf; goto out_corruptbuf;
} }
dp->d_ops->node_hdr_from_disk(&nodehdr, node); xfs_da3_node_hdr_from_disk(mp, &nodehdr, node);
/* Tree taller than we can handle; bail out! */ /* Tree taller than we can handle; bail out! */
if (nodehdr.level >= XFS_DA_NODE_MAXDEPTH) if (nodehdr.level >= XFS_DA_NODE_MAXDEPTH)
@ -243,7 +253,7 @@ xfs_attr_node_list_lookup(
else else
expected_level--; expected_level--;
btree = dp->d_ops->node_tree_p(node); btree = nodehdr.btree;
for (i = 0; i < nodehdr.count; btree++, i++) { for (i = 0; i < nodehdr.count; btree++, i++) {
if (cursor->hashval <= be32_to_cpu(btree->hashval)) { if (cursor->hashval <= be32_to_cpu(btree->hashval)) {
cursor->blkno = be32_to_cpu(btree->before); cursor->blkno = be32_to_cpu(btree->before);
@ -258,7 +268,7 @@ xfs_attr_node_list_lookup(
return 0; return 0;
/* We can't point back to the root. */ /* We can't point back to the root. */
if (cursor->blkno == 0) if (XFS_IS_CORRUPT(mp, cursor->blkno == 0))
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
@ -269,6 +279,7 @@ xfs_attr_node_list_lookup(
return 0; return 0;
out_corruptbuf: out_corruptbuf:
xfs_buf_corruption_error(bp);
xfs_trans_brelse(tp, bp); xfs_trans_brelse(tp, bp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
@ -284,7 +295,7 @@ xfs_attr_node_list(
struct xfs_buf *bp; struct xfs_buf *bp;
struct xfs_inode *dp = context->dp; struct xfs_inode *dp = context->dp;
struct xfs_mount *mp = dp->i_mount; struct xfs_mount *mp = dp->i_mount;
int error; int error = 0;
trace_xfs_attr_node_list(context); trace_xfs_attr_node_list(context);
@ -298,8 +309,8 @@ xfs_attr_node_list(
*/ */
bp = NULL; bp = NULL;
if (cursor->blkno > 0) { if (cursor->blkno > 0) {
error = xfs_da3_node_read(context->tp, dp, cursor->blkno, -1, error = xfs_da3_node_read(context->tp, dp, cursor->blkno, &bp,
&bp, XFS_ATTR_FORK); XFS_ATTR_FORK);
if ((error != 0) && (error != -EFSCORRUPTED)) if ((error != 0) && (error != -EFSCORRUPTED))
return error; return error;
if (bp) { if (bp) {
@ -358,24 +369,27 @@ xfs_attr_node_list(
*/ */
for (;;) { for (;;) {
leaf = bp->b_addr; leaf = bp->b_addr;
xfs_attr3_leaf_list_int(bp, context); error = xfs_attr3_leaf_list_int(bp, context);
if (error)
break;
xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &leafhdr, leaf); xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &leafhdr, leaf);
if (context->seen_enough || leafhdr.forw == 0) if (context->seen_enough || leafhdr.forw == 0)
break; break;
cursor->blkno = leafhdr.forw; cursor->blkno = leafhdr.forw;
xfs_trans_brelse(context->tp, bp); xfs_trans_brelse(context->tp, bp);
error = xfs_attr3_leaf_read(context->tp, dp, cursor->blkno, -1, &bp); error = xfs_attr3_leaf_read(context->tp, dp, cursor->blkno,
&bp);
if (error) if (error)
return error; return error;
} }
xfs_trans_brelse(context->tp, bp); xfs_trans_brelse(context->tp, bp);
return 0; return error;
} }
/* /*
* Copy out attribute list entries for attr_list(), for leaf attribute lists. * Copy out attribute list entries for attr_list(), for leaf attribute lists.
*/ */
void int
xfs_attr3_leaf_list_int( xfs_attr3_leaf_list_int(
struct xfs_buf *bp, struct xfs_buf *bp,
struct xfs_attr_list_context *context) struct xfs_attr_list_context *context)
@ -417,7 +431,7 @@ xfs_attr3_leaf_list_int(
} }
if (i == ichdr.count) { if (i == ichdr.count) {
trace_xfs_attr_list_notfound(context); trace_xfs_attr_list_notfound(context);
return; return 0;
} }
} else { } else {
entry = &entries[0]; entry = &entries[0];
@ -457,6 +471,9 @@ xfs_attr3_leaf_list_int(
valuelen = be32_to_cpu(name_rmt->valuelen); valuelen = be32_to_cpu(name_rmt->valuelen);
} }
if (XFS_IS_CORRUPT(context->dp->i_mount,
!xfs_attr_namecheck(name, namelen)))
return -EFSCORRUPTED;
context->put_listent(context, entry->flags, context->put_listent(context, entry->flags,
name, namelen, valuelen); name, namelen, valuelen);
if (context->seen_enough) if (context->seen_enough)
@ -464,7 +481,7 @@ xfs_attr3_leaf_list_int(
cursor->offset++; cursor->offset++;
} }
trace_xfs_attr_list_leaf_end(context); trace_xfs_attr_list_leaf_end(context);
return; return 0;
} }
/* /*
@ -479,13 +496,13 @@ xfs_attr_leaf_list(xfs_attr_list_context_t *context)
trace_xfs_attr_leaf_list(context); trace_xfs_attr_leaf_list(context);
context->cursor->blkno = 0; context->cursor->blkno = 0;
error = xfs_attr3_leaf_read(context->tp, context->dp, 0, -1, &bp); error = xfs_attr3_leaf_read(context->tp, context->dp, 0, &bp);
if (error) if (error)
return error; return error;
xfs_attr3_leaf_list_int(bp, context); error = xfs_attr3_leaf_list_int(bp, context);
xfs_trans_brelse(context->tp, bp); xfs_trans_brelse(context->tp, bp);
return 0; return error;
} }
int int
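The attr listing paths above now validate every name with xfs_attr_namecheck() before handing it to put_listent, returning -EFSCORRUPTED instead of exposing a garbage name to the VFS. The check is wrapped in XFS_IS_CORRUPT(), which both tests the condition and logs a corruption report against the mount so that EFSCORRUPTED is never returned silently. The kernel's exact macro is not shown in this hunk; the idea is roughly:

	/*
	 * Simplified sketch of the XFS_IS_CORRUPT idea (not the kernel's
	 * definition): evaluate the predicate and, if it indicates corruption,
	 * make sure something is logged before the caller bails out.
	 */
	#define EXAMPLE_IS_CORRUPT(mp, expr) \
		(unlikely(expr) ? \
			(xfs_alert((mp), "Metadata corruption: %s", #expr), true) : \
			false)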


@@ -21,7 +21,7 @@
 #include "xfs_icache.h"
 #include "xfs_bmap_btree.h"
 #include "xfs_trans_space.h"
+#include "xfs_error.h"
 
 kmem_zone_t	*xfs_bui_zone;
 kmem_zone_t	*xfs_bud_zone;
@@ -35,7 +35,7 @@ void
 xfs_bui_item_free(
 	struct xfs_bui_log_item	*buip)
 {
-	kmem_zone_free(xfs_bui_zone, buip);
+	kmem_cache_free(xfs_bui_zone, buip);
 }
 
 /*
@@ -201,7 +201,7 @@ xfs_bud_item_release(
 	struct xfs_bud_log_item	*budp = BUD_ITEM(lip);
 
 	xfs_bui_release(budp->bud_buip);
-	kmem_zone_free(xfs_bud_zone, budp);
+	kmem_cache_free(xfs_bud_zone, budp);
 }
 
 static const struct xfs_item_ops xfs_bud_item_ops = {
@@ -456,7 +456,7 @@ xfs_bui_recover(
 	if (buip->bui_format.bui_nextents != XFS_BUI_MAX_FAST_EXTENTS) {
 		set_bit(XFS_BUI_RECOVERED, &buip->bui_flags);
 		xfs_bui_release(buip);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	/*
@@ -490,7 +490,7 @@ xfs_bui_recover(
 	 */
 		set_bit(XFS_BUI_RECOVERED, &buip->bui_flags);
 		xfs_bui_release(buip);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate,
@@ -525,6 +525,7 @@ xfs_bui_recover(
 		type = bui_type;
 		break;
 	default:
+		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
 		error = -EFSCORRUPTED;
 		goto err_inode;
 	}


@ -53,15 +53,16 @@ xfs_fsb_to_db(struct xfs_inode *ip, xfs_fsblock_t fsb)
*/ */
int int
xfs_zero_extent( xfs_zero_extent(
struct xfs_inode *ip, struct xfs_inode *ip,
xfs_fsblock_t start_fsb, xfs_fsblock_t start_fsb,
xfs_off_t count_fsb) xfs_off_t count_fsb)
{ {
struct xfs_mount *mp = ip->i_mount; struct xfs_mount *mp = ip->i_mount;
xfs_daddr_t sector = xfs_fsb_to_db(ip, start_fsb); struct xfs_buftarg *target = xfs_inode_buftarg(ip);
sector_t block = XFS_BB_TO_FSBT(mp, sector); xfs_daddr_t sector = xfs_fsb_to_db(ip, start_fsb);
sector_t block = XFS_BB_TO_FSBT(mp, sector);
return blkdev_issue_zeroout(xfs_find_bdev_for_inode(VFS_I(ip)), return blkdev_issue_zeroout(target->bt_bdev,
block << (mp->m_super->s_blocksize_bits - 9), block << (mp->m_super->s_blocksize_bits - 9),
count_fsb << (mp->m_super->s_blocksize_bits - 9), count_fsb << (mp->m_super->s_blocksize_bits - 9),
GFP_NOFS, 0); GFP_NOFS, 0);
@ -164,13 +165,6 @@ xfs_bmap_rtalloc(
xfs_trans_mod_dquot_byino(ap->tp, ap->ip, xfs_trans_mod_dquot_byino(ap->tp, ap->ip,
ap->wasdel ? XFS_TRANS_DQ_DELRTBCOUNT : ap->wasdel ? XFS_TRANS_DQ_DELRTBCOUNT :
XFS_TRANS_DQ_RTBCOUNT, (long) ralen); XFS_TRANS_DQ_RTBCOUNT, (long) ralen);
/* Zero the extent if we were asked to do so */
if (ap->datatype & XFS_ALLOC_USERDATA_ZERO) {
error = xfs_zero_extent(ap->ip, ap->blkno, ap->length);
if (error)
return error;
}
} else { } else {
ap->length = 0; ap->length = 0;
} }
@ -178,29 +172,6 @@ xfs_bmap_rtalloc(
} }
#endif /* CONFIG_XFS_RT */ #endif /* CONFIG_XFS_RT */
/*
* Check if the endoff is outside the last extent. If so the caller will grow
* the allocation to a stripe unit boundary. All offsets are considered outside
* the end of file for an empty fork, so 1 is returned in *eof in that case.
*/
int
xfs_bmap_eof(
struct xfs_inode *ip,
xfs_fileoff_t endoff,
int whichfork,
int *eof)
{
struct xfs_bmbt_irec rec;
int error;
error = xfs_bmap_last_extent(NULL, ip, whichfork, &rec, eof);
if (error || *eof)
return error;
*eof = endoff >= rec.br_startoff + rec.br_blockcount;
return 0;
}
/* /*
* Extent tree block counting routines. * Extent tree block counting routines.
*/ */
@ -228,106 +199,6 @@ xfs_bmap_count_leaves(
return numrecs; return numrecs;
} }
/*
* Count leaf blocks given a range of extent records originally
* in btree format.
*/
STATIC void
xfs_bmap_disk_count_leaves(
struct xfs_mount *mp,
struct xfs_btree_block *block,
int numrecs,
xfs_filblks_t *count)
{
int b;
xfs_bmbt_rec_t *frp;
for (b = 1; b <= numrecs; b++) {
frp = XFS_BMBT_REC_ADDR(mp, block, b);
*count += xfs_bmbt_disk_get_blockcount(frp);
}
}
/*
* Recursively walks each level of a btree
* to count total fsblocks in use.
*/
STATIC int
xfs_bmap_count_tree(
struct xfs_mount *mp,
struct xfs_trans *tp,
struct xfs_ifork *ifp,
xfs_fsblock_t blockno,
int levelin,
xfs_extnum_t *nextents,
xfs_filblks_t *count)
{
int error;
struct xfs_buf *bp, *nbp;
int level = levelin;
__be64 *pp;
xfs_fsblock_t bno = blockno;
xfs_fsblock_t nextbno;
struct xfs_btree_block *block, *nextblock;
int numrecs;
error = xfs_btree_read_bufl(mp, tp, bno, &bp, XFS_BMAP_BTREE_REF,
&xfs_bmbt_buf_ops);
if (error)
return error;
*count += 1;
block = XFS_BUF_TO_BLOCK(bp);
if (--level) {
/* Not at node above leaves, count this level of nodes */
nextbno = be64_to_cpu(block->bb_u.l.bb_rightsib);
while (nextbno != NULLFSBLOCK) {
error = xfs_btree_read_bufl(mp, tp, nextbno, &nbp,
XFS_BMAP_BTREE_REF,
&xfs_bmbt_buf_ops);
if (error)
return error;
*count += 1;
nextblock = XFS_BUF_TO_BLOCK(nbp);
nextbno = be64_to_cpu(nextblock->bb_u.l.bb_rightsib);
xfs_trans_brelse(tp, nbp);
}
/* Dive to the next level */
pp = XFS_BMBT_PTR_ADDR(mp, block, 1, mp->m_bmap_dmxr[1]);
bno = be64_to_cpu(*pp);
error = xfs_bmap_count_tree(mp, tp, ifp, bno, level, nextents,
count);
if (error) {
xfs_trans_brelse(tp, bp);
XFS_ERROR_REPORT("xfs_bmap_count_tree(1)",
XFS_ERRLEVEL_LOW, mp);
return -EFSCORRUPTED;
}
xfs_trans_brelse(tp, bp);
} else {
/* count all level 1 nodes and their leaves */
for (;;) {
nextbno = be64_to_cpu(block->bb_u.l.bb_rightsib);
numrecs = be16_to_cpu(block->bb_numrecs);
(*nextents) += numrecs;
xfs_bmap_disk_count_leaves(mp, block, numrecs, count);
xfs_trans_brelse(tp, bp);
if (nextbno == NULLFSBLOCK)
break;
bno = nextbno;
error = xfs_btree_read_bufl(mp, tp, bno, &bp,
XFS_BMAP_BTREE_REF,
&xfs_bmbt_buf_ops);
if (error)
return error;
*count += 1;
block = XFS_BUF_TO_BLOCK(bp);
}
}
return 0;
}
/* /*
* Count fsblocks of the given fork. Delayed allocation extents are * Count fsblocks of the given fork. Delayed allocation extents are
* not counted towards the totals. * not counted towards the totals.
@ -340,26 +211,19 @@ xfs_bmap_count_blocks(
xfs_extnum_t *nextents, xfs_extnum_t *nextents,
xfs_filblks_t *count) xfs_filblks_t *count)
{ {
struct xfs_mount *mp; /* file system mount structure */ struct xfs_mount *mp = ip->i_mount;
__be64 *pp; /* pointer to block address */ struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
struct xfs_btree_block *block; /* current btree block */ struct xfs_btree_cur *cur;
struct xfs_ifork *ifp; /* fork structure */ xfs_extlen_t btblocks = 0;
xfs_fsblock_t bno; /* block # of "block" */
int level; /* btree level, for checking */
int error; int error;
bno = NULLFSBLOCK;
mp = ip->i_mount;
*nextents = 0; *nextents = 0;
*count = 0; *count = 0;
ifp = XFS_IFORK_PTR(ip, whichfork);
if (!ifp) if (!ifp)
return 0; return 0;
switch (XFS_IFORK_FORMAT(ip, whichfork)) { switch (XFS_IFORK_FORMAT(ip, whichfork)) {
case XFS_DINODE_FMT_EXTENTS:
*nextents = xfs_bmap_count_leaves(ifp, count);
return 0;
case XFS_DINODE_FMT_BTREE: case XFS_DINODE_FMT_BTREE:
if (!(ifp->if_flags & XFS_IFEXTENTS)) { if (!(ifp->if_flags & XFS_IFEXTENTS)) {
error = xfs_iread_extents(tp, ip, whichfork); error = xfs_iread_extents(tp, ip, whichfork);
@ -367,26 +231,23 @@ xfs_bmap_count_blocks(
return error; return error;
} }
/* cur = xfs_bmbt_init_cursor(mp, tp, ip, whichfork);
* Root level must use BMAP_BROOT_PTR_ADDR macro to get ptr out. error = xfs_btree_count_blocks(cur, &btblocks);
*/ xfs_btree_del_cursor(cur, error);
block = ifp->if_broot; if (error)
level = be16_to_cpu(block->bb_level); return error;
ASSERT(level > 0);
pp = XFS_BMAP_BROOT_PTR_ADDR(mp, block, 1, ifp->if_broot_bytes);
bno = be64_to_cpu(*pp);
ASSERT(bno != NULLFSBLOCK);
ASSERT(XFS_FSB_TO_AGNO(mp, bno) < mp->m_sb.sb_agcount);
ASSERT(XFS_FSB_TO_AGBNO(mp, bno) < mp->m_sb.sb_agblocks);
error = xfs_bmap_count_tree(mp, tp, ifp, bno, level, /*
nextents, count); * xfs_btree_count_blocks includes the root block contained in
if (error) { * the inode fork in @btblocks, so subtract one because we're
XFS_ERROR_REPORT("xfs_bmap_count_blocks(2)", * only interested in allocated disk blocks.
XFS_ERRLEVEL_LOW, mp); */
return -EFSCORRUPTED; *count += btblocks - 1;
}
return 0; /* fall through */
case XFS_DINODE_FMT_EXTENTS:
*nextents = xfs_bmap_count_leaves(ifp, count);
break;
} }
return 0; return 0;
@ -964,8 +825,8 @@ xfs_alloc_file_space(
xfs_trans_ijoin(tp, ip, 0); xfs_trans_ijoin(tp, ip, 0);
error = xfs_bmapi_write(tp, ip, startoffset_fsb, error = xfs_bmapi_write(tp, ip, startoffset_fsb,
allocatesize_fsb, alloc_type, resblks, allocatesize_fsb, alloc_type, 0, imapp,
imapp, &nimaps); &nimaps);
if (error) if (error)
goto error0; goto error0;
@ -1039,6 +900,7 @@ out_trans_cancel:
goto out_unlock; goto out_unlock;
} }
/* Caller must first wait for the completion of any pending DIOs if required. */
int int
xfs_flush_unmap_range( xfs_flush_unmap_range(
struct xfs_inode *ip, struct xfs_inode *ip,
@ -1050,9 +912,6 @@ xfs_flush_unmap_range(
xfs_off_t rounding, start, end; xfs_off_t rounding, start, end;
int error; int error;
/* wait for the completion of any pending DIOs */
inode_dio_wait(inode);
rounding = max_t(xfs_off_t, 1 << mp->m_sb.sb_blocklog, PAGE_SIZE); rounding = max_t(xfs_off_t, 1 << mp->m_sb.sb_blocklog, PAGE_SIZE);
start = round_down(offset, rounding); start = round_down(offset, rounding);
end = round_up(offset + len, rounding) - 1; end = round_up(offset + len, rounding) - 1;
@ -1084,10 +943,6 @@ xfs_free_file_space(
if (len <= 0) /* if nothing being freed */ if (len <= 0) /* if nothing being freed */
return 0; return 0;
error = xfs_flush_unmap_range(ip, offset, len);
if (error)
return error;
startoffset_fsb = XFS_B_TO_FSB(mp, offset); startoffset_fsb = XFS_B_TO_FSB(mp, offset);
endoffset_fsb = XFS_B_TO_FSBT(mp, offset + len); endoffset_fsb = XFS_B_TO_FSBT(mp, offset + len);
@ -1113,7 +968,8 @@ xfs_free_file_space(
return 0; return 0;
if (offset + len > XFS_ISIZE(ip)) if (offset + len > XFS_ISIZE(ip))
len = XFS_ISIZE(ip) - offset; len = XFS_ISIZE(ip) - offset;
error = iomap_zero_range(VFS_I(ip), offset, len, NULL, &xfs_iomap_ops); error = iomap_zero_range(VFS_I(ip), offset, len, NULL,
&xfs_buffered_write_iomap_ops);
if (error) if (error)
return error; return error;
@ -1131,43 +987,6 @@ xfs_free_file_space(
return error; return error;
} }
/*
* Preallocate and zero a range of a file. This mechanism has the allocation
* semantics of fallocate and in addition converts data in the range to zeroes.
*/
int
xfs_zero_file_space(
struct xfs_inode *ip,
xfs_off_t offset,
xfs_off_t len)
{
struct xfs_mount *mp = ip->i_mount;
uint blksize;
int error;
trace_xfs_zero_file_space(ip);
blksize = 1 << mp->m_sb.sb_blocklog;
/*
* Punch a hole and prealloc the range. We use hole punch rather than
* unwritten extent conversion for two reasons:
*
* 1.) Hole punch handles partial block zeroing for us.
*
* 2.) If prealloc returns ENOSPC, the file range is still zero-valued
* by virtue of the hole punch.
*/
error = xfs_free_file_space(ip, offset, len);
if (error || xfs_is_always_cow_inode(ip))
return error;
return xfs_alloc_file_space(ip, round_down(offset, blksize),
round_up(offset + len, blksize) -
round_down(offset, blksize),
XFS_BMAPI_PREALLOC);
}
static int static int
xfs_prepare_shift( xfs_prepare_shift(
struct xfs_inode *ip, struct xfs_inode *ip,
@ -1750,6 +1569,14 @@ xfs_swap_extents(
goto out_unlock; goto out_unlock;
} }
error = xfs_qm_dqattach(ip);
if (error)
goto out_unlock;
error = xfs_qm_dqattach(tip);
if (error)
goto out_unlock;
error = xfs_swap_extent_flush(ip); error = xfs_swap_extent_flush(ip);
if (error) if (error)
goto out_unlock; goto out_unlock;
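The deleted xfs_bmap_count_tree()/xfs_bmap_disk_count_leaves() helpers are replaced by the generic xfs_btree_count_blocks() walker in xfs_bmap_count_blocks(); because the bmbt root lives in the inode fork rather than on disk, one block is subtracted from the walker's total. A condensed restatement of the new path shown above (identifiers as in the diff):

	cur = xfs_bmbt_init_cursor(mp, tp, ip, whichfork);
	error = xfs_btree_count_blocks(cur, &btblocks);
	xfs_btree_del_cursor(cur, error);
	if (error)
		return error;

	/* the bmbt root block is part of the inode fork, not an allocated disk block */
	*count += btblocks - 1;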


@@ -30,8 +30,6 @@ xfs_bmap_rtalloc(struct xfs_bmalloca *ap)
 }
 #endif /* CONFIG_XFS_RT */
 
-int	xfs_bmap_eof(struct xfs_inode *ip, xfs_fileoff_t endoff,
-		int whichfork, int *eof);
 int	xfs_bmap_punch_delalloc_range(struct xfs_inode *ip,
 		xfs_fileoff_t start_fsb, xfs_fileoff_t length);
@@ -59,8 +57,6 @@ int xfs_alloc_file_space(struct xfs_inode *ip, xfs_off_t offset,
 		xfs_off_t len, int alloc_type);
 int	xfs_free_file_space(struct xfs_inode *ip, xfs_off_t offset,
 		xfs_off_t len);
-int	xfs_zero_file_space(struct xfs_inode *ip, xfs_off_t offset,
-		xfs_off_t len);
 int	xfs_collapse_file_space(struct xfs_inode *, xfs_off_t offset,
 		xfs_off_t len);
 int	xfs_insert_file_space(struct xfs_inode *, xfs_off_t offset,


@ -238,7 +238,7 @@ _xfs_buf_alloc(
*/ */
error = xfs_buf_get_maps(bp, nmaps); error = xfs_buf_get_maps(bp, nmaps);
if (error) { if (error) {
kmem_zone_free(xfs_buf_zone, bp); kmem_cache_free(xfs_buf_zone, bp);
return NULL; return NULL;
} }
@ -304,7 +304,7 @@ _xfs_buf_free_pages(
* The buffer must not be on any hash - use xfs_buf_rele instead for * The buffer must not be on any hash - use xfs_buf_rele instead for
* hashed and refcounted buffers * hashed and refcounted buffers
*/ */
void static void
xfs_buf_free( xfs_buf_free(
xfs_buf_t *bp) xfs_buf_t *bp)
{ {
@ -328,7 +328,7 @@ xfs_buf_free(
kmem_free(bp->b_addr); kmem_free(bp->b_addr);
_xfs_buf_free_pages(bp); _xfs_buf_free_pages(bp);
xfs_buf_free_maps(bp); xfs_buf_free_maps(bp);
kmem_zone_free(xfs_buf_zone, bp); kmem_cache_free(xfs_buf_zone, bp);
} }
/* /*
@ -461,7 +461,7 @@ _xfs_buf_map_pages(
unsigned nofs_flag; unsigned nofs_flag;
/* /*
* vm_map_ram() will allocate auxillary structures (e.g. * vm_map_ram() will allocate auxiliary structures (e.g.
* pagetables) with GFP_KERNEL, yet we are likely to be under * pagetables) with GFP_KERNEL, yet we are likely to be under
* GFP_NOFS context here. Hence we need to tell memory reclaim * GFP_NOFS context here. Hence we need to tell memory reclaim
* that we are in such a context via PF_MEMALLOC_NOFS to prevent * that we are in such a context via PF_MEMALLOC_NOFS to prevent
@ -949,7 +949,7 @@ xfs_buf_get_uncached(
_xfs_buf_free_pages(bp); _xfs_buf_free_pages(bp);
fail_free_buf: fail_free_buf:
xfs_buf_free_maps(bp); xfs_buf_free_maps(bp);
kmem_zone_free(xfs_buf_zone, bp); kmem_cache_free(xfs_buf_zone, bp);
fail: fail:
return NULL; return NULL;
} }
@ -1261,8 +1261,7 @@ xfs_buf_ioapply_map(
int map, int map,
int *buf_offset, int *buf_offset,
int *count, int *count,
int op, int op)
int op_flags)
{ {
int page_index; int page_index;
int total_nr_pages = bp->b_page_count; int total_nr_pages = bp->b_page_count;
@ -1297,7 +1296,7 @@ next_chunk:
bio->bi_iter.bi_sector = sector; bio->bi_iter.bi_sector = sector;
bio->bi_end_io = xfs_buf_bio_end_io; bio->bi_end_io = xfs_buf_bio_end_io;
bio->bi_private = bp; bio->bi_private = bp;
bio_set_op_attrs(bio, op, op_flags); bio->bi_opf = op;
for (; size && nr_pages; nr_pages--, page_index++) { for (; size && nr_pages; nr_pages--, page_index++) {
int rbytes, nbytes = PAGE_SIZE - offset; int rbytes, nbytes = PAGE_SIZE - offset;
@ -1342,7 +1341,6 @@ _xfs_buf_ioapply(
{ {
struct blk_plug plug; struct blk_plug plug;
int op; int op;
int op_flags = 0;
int offset; int offset;
int size; int size;
int i; int i;
@ -1384,15 +1382,14 @@ _xfs_buf_ioapply(
dump_stack(); dump_stack();
} }
} }
} else if (bp->b_flags & XBF_READ_AHEAD) {
op = REQ_OP_READ;
op_flags = REQ_RAHEAD;
} else { } else {
op = REQ_OP_READ; op = REQ_OP_READ;
if (bp->b_flags & XBF_READ_AHEAD)
op |= REQ_RAHEAD;
} }
/* we only use the buffer cache for meta-data */ /* we only use the buffer cache for meta-data */
op_flags |= REQ_META; op |= REQ_META;
/* /*
* Walk all the vectors issuing IO on them. Set up the initial offset * Walk all the vectors issuing IO on them. Set up the initial offset
@ -1404,7 +1401,7 @@ _xfs_buf_ioapply(
size = BBTOB(bp->b_length); size = BBTOB(bp->b_length);
blk_start_plug(&plug); blk_start_plug(&plug);
for (i = 0; i < bp->b_map_count; i++) { for (i = 0; i < bp->b_map_count; i++) {
xfs_buf_ioapply_map(bp, i, &offset, &size, op, op_flags); xfs_buf_ioapply_map(bp, i, &offset, &size, op);
if (bp->b_error) if (bp->b_error)
break; break;
if (size <= 0) if (size <= 0)
@ -2063,8 +2060,9 @@ xfs_buf_delwri_pushbuf(
int __init int __init
xfs_buf_init(void) xfs_buf_init(void)
{ {
xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf", xfs_buf_zone = kmem_cache_create("xfs_buf",
KM_ZONE_HWALIGN, NULL); sizeof(struct xfs_buf), 0,
SLAB_HWCACHE_ALIGN, NULL);
if (!xfs_buf_zone) if (!xfs_buf_zone)
goto out; goto out;
@ -2077,7 +2075,7 @@ xfs_buf_init(void)
void void
xfs_buf_terminate(void) xfs_buf_terminate(void)
{ {
kmem_zone_destroy(xfs_buf_zone); kmem_cache_destroy(xfs_buf_zone);
} }
void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref) void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref)
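Two mechanical conversions run through the buffer cache code above: the kmem_zone_free()/kmem_zone_init_flags() wrappers give way to the plain slab API (kmem_cache_free()/kmem_cache_create()), and the request operation and its flags are carried in a single value written to bio->bi_opf instead of being split across op/op_flags and merged by bio_set_op_attrs(). Condensed from the diff above, the flag handling in _xfs_buf_ioapply() now reads roughly:

	int op;

	if (bp->b_flags & XBF_WRITE) {
		op = REQ_OP_WRITE;
		/* write-specific verifier checks elided */
	} else {
		op = REQ_OP_READ;
		if (bp->b_flags & XBF_READ_AHEAD)
			op |= REQ_RAHEAD;
	}

	/* we only use the buffer cache for meta-data */
	op |= REQ_META;

and each bio built for the buffer then simply sets bio->bi_opf = op.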


@@ -244,7 +244,6 @@ int xfs_buf_read_uncached(struct xfs_buftarg *target, xfs_daddr_t daddr,
 void xfs_buf_hold(struct xfs_buf *bp);
 
 /* Releasing Buffers */
-extern void xfs_buf_free(xfs_buf_t *);
 extern void xfs_buf_rele(xfs_buf_t *);
 
 /* Locking and Unlocking Buffers */


@@ -763,7 +763,7 @@ xfs_buf_item_init(
 	error = xfs_buf_item_get_format(bip, bp->b_map_count);
 	ASSERT(error == 0);
 	if (error) {	/* to stop gcc throwing set-but-unused warnings */
-		kmem_zone_free(xfs_buf_item_zone, bip);
+		kmem_cache_free(xfs_buf_item_zone, bip);
 		return error;
 	}
 
@@ -851,7 +851,7 @@ xfs_buf_item_log_segment(
 	 * first_bit and last_bit.
 	 */
 	while ((bits_to_set - bits_set) >= NBWORD) {
-		*wordp |= 0xffffffff;
+		*wordp = 0xffffffff;
 		bits_set += NBWORD;
 		wordp++;
 	}
@@ -939,7 +939,7 @@ xfs_buf_item_free(
 {
 	xfs_buf_item_free_format(bip);
 	kmem_free(bip->bli_item.li_lv_shadow);
-	kmem_zone_free(xfs_buf_item_zone, bip);
+	kmem_cache_free(xfs_buf_item_zone, bip);
 }
 
 /*


@ -17,6 +17,7 @@
#include "xfs_trace.h" #include "xfs_trace.h"
#include "xfs_bmap.h" #include "xfs_bmap.h"
#include "xfs_trans.h" #include "xfs_trans.h"
#include "xfs_error.h"
/* /*
* Directory file type support functions * Directory file type support functions
@ -47,6 +48,7 @@ xfs_dir2_sf_getdents(
{ {
int i; /* shortform entry number */ int i; /* shortform entry number */
struct xfs_inode *dp = args->dp; /* incore directory inode */ struct xfs_inode *dp = args->dp; /* incore directory inode */
struct xfs_mount *mp = dp->i_mount;
xfs_dir2_dataptr_t off; /* current entry's offset */ xfs_dir2_dataptr_t off; /* current entry's offset */
xfs_dir2_sf_entry_t *sfep; /* shortform directory entry */ xfs_dir2_sf_entry_t *sfep; /* shortform directory entry */
xfs_dir2_sf_hdr_t *sfp; /* shortform structure */ xfs_dir2_sf_hdr_t *sfp; /* shortform structure */
@ -68,15 +70,15 @@ xfs_dir2_sf_getdents(
return 0; return 0;
/* /*
* Precalculate offsets for . and .. as we will always need them. * Precalculate offsets for "." and ".." as we will always need them.
* * This relies on the fact that directories always start with the
* XXX(hch): the second argument is sometimes 0 and sometimes * entries for "." and "..".
* geo->datablk
*/ */
dot_offset = xfs_dir2_db_off_to_dataptr(geo, geo->datablk, dot_offset = xfs_dir2_db_off_to_dataptr(geo, geo->datablk,
dp->d_ops->data_dot_offset); geo->data_entry_offset);
dotdot_offset = xfs_dir2_db_off_to_dataptr(geo, geo->datablk, dotdot_offset = xfs_dir2_db_off_to_dataptr(geo, geo->datablk,
dp->d_ops->data_dotdot_offset); geo->data_entry_offset +
xfs_dir2_data_entsize(mp, sizeof(".") - 1));
/* /*
* Put . entry unless we're starting past it. * Put . entry unless we're starting past it.
@ -91,7 +93,7 @@ xfs_dir2_sf_getdents(
* Put .. entry unless we're starting past it. * Put .. entry unless we're starting past it.
*/ */
if (ctx->pos <= dotdot_offset) { if (ctx->pos <= dotdot_offset) {
ino = dp->d_ops->sf_get_parent_ino(sfp); ino = xfs_dir2_sf_get_parent_ino(sfp);
ctx->pos = dotdot_offset & 0x7fffffff; ctx->pos = dotdot_offset & 0x7fffffff;
if (!dir_emit(ctx, "..", 2, ino, DT_DIR)) if (!dir_emit(ctx, "..", 2, ino, DT_DIR))
return 0; return 0;
@ -108,17 +110,21 @@ xfs_dir2_sf_getdents(
xfs_dir2_sf_get_offset(sfep)); xfs_dir2_sf_get_offset(sfep));
if (ctx->pos > off) { if (ctx->pos > off) {
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
continue; continue;
} }
ino = dp->d_ops->sf_get_ino(sfp, sfep); ino = xfs_dir2_sf_get_ino(mp, sfp, sfep);
filetype = dp->d_ops->sf_get_ftype(sfep); filetype = xfs_dir2_sf_get_ftype(mp, sfep);
ctx->pos = off & 0x7fffffff; ctx->pos = off & 0x7fffffff;
if (XFS_IS_CORRUPT(dp->i_mount,
!xfs_dir2_namecheck(sfep->name,
sfep->namelen)))
return -EFSCORRUPTED;
if (!dir_emit(ctx, (char *)sfep->name, sfep->namelen, ino, if (!dir_emit(ctx, (char *)sfep->name, sfep->namelen, ino,
xfs_dir3_get_dtype(dp->i_mount, filetype))) xfs_dir3_get_dtype(mp, filetype)))
return 0; return 0;
sfep = dp->d_ops->sf_nextentry(sfp, sfep); sfep = xfs_dir2_sf_nextentry(mp, sfp, sfep);
} }
ctx->pos = xfs_dir2_db_off_to_dataptr(geo, geo->datablk + 1, 0) & ctx->pos = xfs_dir2_db_off_to_dataptr(geo, geo->datablk + 1, 0) &
@ -135,17 +141,14 @@ xfs_dir2_block_getdents(
struct dir_context *ctx) struct dir_context *ctx)
{ {
struct xfs_inode *dp = args->dp; /* incore directory inode */ struct xfs_inode *dp = args->dp; /* incore directory inode */
xfs_dir2_data_hdr_t *hdr; /* block header */
struct xfs_buf *bp; /* buffer for block */ struct xfs_buf *bp; /* buffer for block */
xfs_dir2_data_entry_t *dep; /* block data entry */
xfs_dir2_data_unused_t *dup; /* block unused entry */
char *endptr; /* end of the data entries */
int error; /* error return value */ int error; /* error return value */
char *ptr; /* current data entry */
int wantoff; /* starting block offset */ int wantoff; /* starting block offset */
xfs_off_t cook; xfs_off_t cook;
struct xfs_da_geometry *geo = args->geo; struct xfs_da_geometry *geo = args->geo;
int lock_mode; int lock_mode;
unsigned int offset;
unsigned int end;
/* /*
* If the block number in the offset is out of range, we're done. * If the block number in the offset is out of range, we're done.
@ -164,56 +167,55 @@ xfs_dir2_block_getdents(
* We'll skip entries before this. * We'll skip entries before this.
*/ */
wantoff = xfs_dir2_dataptr_to_off(geo, ctx->pos); wantoff = xfs_dir2_dataptr_to_off(geo, ctx->pos);
hdr = bp->b_addr;
xfs_dir3_data_check(dp, bp); xfs_dir3_data_check(dp, bp);
/*
* Set up values for the loop.
*/
ptr = (char *)dp->d_ops->data_entry_p(hdr);
endptr = xfs_dir3_data_endp(geo, hdr);
/* /*
* Loop over the data portion of the block. * Loop over the data portion of the block.
* Each object is a real entry (dep) or an unused one (dup). * Each object is a real entry (dep) or an unused one (dup).
*/ */
while (ptr < endptr) { offset = geo->data_entry_offset;
end = xfs_dir3_data_end_offset(geo, bp->b_addr);
while (offset < end) {
struct xfs_dir2_data_unused *dup = bp->b_addr + offset;
struct xfs_dir2_data_entry *dep = bp->b_addr + offset;
uint8_t filetype; uint8_t filetype;
dup = (xfs_dir2_data_unused_t *)ptr;
/* /*
* Unused, skip it. * Unused, skip it.
*/ */
if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
ptr += be16_to_cpu(dup->length); offset += be16_to_cpu(dup->length);
continue; continue;
} }
dep = (xfs_dir2_data_entry_t *)ptr;
/* /*
* Bump pointer for the next iteration. * Bump pointer for the next iteration.
*/ */
ptr += dp->d_ops->data_entsize(dep->namelen); offset += xfs_dir2_data_entsize(dp->i_mount, dep->namelen);
/* /*
* The entry is before the desired starting point, skip it. * The entry is before the desired starting point, skip it.
*/ */
if ((char *)dep - (char *)hdr < wantoff) if (offset < wantoff)
continue; continue;
cook = xfs_dir2_db_off_to_dataptr(geo, geo->datablk, cook = xfs_dir2_db_off_to_dataptr(geo, geo->datablk, offset);
(char *)dep - (char *)hdr);
ctx->pos = cook & 0x7fffffff; ctx->pos = cook & 0x7fffffff;
filetype = dp->d_ops->data_get_ftype(dep); filetype = xfs_dir2_data_get_ftype(dp->i_mount, dep);
/* /*
* If it didn't fit, set the final offset to here & return. * If it didn't fit, set the final offset to here & return.
*/ */
if (XFS_IS_CORRUPT(dp->i_mount,
!xfs_dir2_namecheck(dep->name,
dep->namelen))) {
error = -EFSCORRUPTED;
goto out_rele;
}
if (!dir_emit(ctx, (char *)dep->name, dep->namelen, if (!dir_emit(ctx, (char *)dep->name, dep->namelen,
be64_to_cpu(dep->inumber), be64_to_cpu(dep->inumber),
xfs_dir3_get_dtype(dp->i_mount, filetype))) { xfs_dir3_get_dtype(dp->i_mount, filetype)))
xfs_trans_brelse(args->trans, bp); goto out_rele;
return 0;
}
} }
/* /*
@ -222,8 +224,9 @@ xfs_dir2_block_getdents(
*/ */
ctx->pos = xfs_dir2_db_off_to_dataptr(geo, geo->datablk + 1, 0) & ctx->pos = xfs_dir2_db_off_to_dataptr(geo, geo->datablk + 1, 0) &
0x7fffffff; 0x7fffffff;
out_rele:
xfs_trans_brelse(args->trans, bp); xfs_trans_brelse(args->trans, bp);
return 0; return error;
} }
/* /*
@ -276,7 +279,7 @@ xfs_dir2_leaf_readbuf(
new_off = xfs_dir2_da_to_byte(geo, map.br_startoff); new_off = xfs_dir2_da_to_byte(geo, map.br_startoff);
if (new_off > *cur_off) if (new_off > *cur_off)
*cur_off = new_off; *cur_off = new_off;
error = xfs_dir3_data_read(args->trans, dp, map.br_startoff, -1, &bp); error = xfs_dir3_data_read(args->trans, dp, map.br_startoff, 0, &bp);
if (error) if (error)
goto out; goto out;
@ -311,7 +314,8 @@ xfs_dir2_leaf_readbuf(
break; break;
} }
if (next_ra > *ra_blk) { if (next_ra > *ra_blk) {
xfs_dir3_data_readahead(dp, next_ra, -2); xfs_dir3_data_readahead(dp, next_ra,
XFS_DABUF_MAP_HOLE_OK);
*ra_blk = next_ra; *ra_blk = next_ra;
} }
ra_want -= geo->fsbcount; ra_want -= geo->fsbcount;
@ -343,17 +347,17 @@ xfs_dir2_leaf_getdents(
size_t bufsize) size_t bufsize)
{ {
struct xfs_inode *dp = args->dp; struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
struct xfs_buf *bp = NULL; /* data block buffer */ struct xfs_buf *bp = NULL; /* data block buffer */
xfs_dir2_data_hdr_t *hdr; /* data block header */
xfs_dir2_data_entry_t *dep; /* data entry */ xfs_dir2_data_entry_t *dep; /* data entry */
xfs_dir2_data_unused_t *dup; /* unused entry */ xfs_dir2_data_unused_t *dup; /* unused entry */
char *ptr = NULL; /* pointer to current data */
struct xfs_da_geometry *geo = args->geo; struct xfs_da_geometry *geo = args->geo;
xfs_dablk_t rablk = 0; /* current readahead block */ xfs_dablk_t rablk = 0; /* current readahead block */
xfs_dir2_off_t curoff; /* current overall offset */ xfs_dir2_off_t curoff; /* current overall offset */
int length; /* temporary length value */ int length; /* temporary length value */
int byteoff; /* offset in current block */ int byteoff; /* offset in current block */
int lock_mode; int lock_mode;
unsigned int offset = 0;
int error = 0; /* error return value */ int error = 0; /* error return value */
/* /*
@ -380,7 +384,7 @@ xfs_dir2_leaf_getdents(
* If we have no buffer, or we're off the end of the * If we have no buffer, or we're off the end of the
* current buffer, need to get another one. * current buffer, need to get another one.
*/ */
if (!bp || ptr >= (char *)bp->b_addr + geo->blksize) { if (!bp || offset >= geo->blksize) {
if (bp) { if (bp) {
xfs_trans_brelse(args->trans, bp); xfs_trans_brelse(args->trans, bp);
bp = NULL; bp = NULL;
@ -393,36 +397,35 @@ xfs_dir2_leaf_getdents(
if (error || !bp) if (error || !bp)
break; break;
hdr = bp->b_addr;
xfs_dir3_data_check(dp, bp); xfs_dir3_data_check(dp, bp);
/* /*
* Find our position in the block. * Find our position in the block.
*/ */
ptr = (char *)dp->d_ops->data_entry_p(hdr); offset = geo->data_entry_offset;
byteoff = xfs_dir2_byte_to_off(geo, curoff); byteoff = xfs_dir2_byte_to_off(geo, curoff);
/* /*
* Skip past the header. * Skip past the header.
*/ */
if (byteoff == 0) if (byteoff == 0)
curoff += dp->d_ops->data_entry_offset; curoff += geo->data_entry_offset;
/* /*
* Skip past entries until we reach our offset. * Skip past entries until we reach our offset.
*/ */
else { else {
while ((char *)ptr - (char *)hdr < byteoff) { while (offset < byteoff) {
dup = (xfs_dir2_data_unused_t *)ptr; dup = bp->b_addr + offset;
if (be16_to_cpu(dup->freetag) if (be16_to_cpu(dup->freetag)
== XFS_DIR2_DATA_FREE_TAG) { == XFS_DIR2_DATA_FREE_TAG) {
length = be16_to_cpu(dup->length); length = be16_to_cpu(dup->length);
ptr += length; offset += length;
continue; continue;
} }
dep = (xfs_dir2_data_entry_t *)ptr; dep = bp->b_addr + offset;
length = length = xfs_dir2_data_entsize(mp,
dp->d_ops->data_entsize(dep->namelen); dep->namelen);
ptr += length; offset += length;
} }
/* /*
* Now set our real offset. * Now set our real offset.
@ -430,32 +433,38 @@ xfs_dir2_leaf_getdents(
curoff = curoff =
xfs_dir2_db_off_to_byte(geo, xfs_dir2_db_off_to_byte(geo,
xfs_dir2_byte_to_db(geo, curoff), xfs_dir2_byte_to_db(geo, curoff),
(char *)ptr - (char *)hdr); offset);
if (ptr >= (char *)hdr + geo->blksize) { if (offset >= geo->blksize)
continue; continue;
}
} }
} }
/* /*
* We have a pointer to an entry. * We have a pointer to an entry. Is it a live one?
* Is it a live one?
*/ */
dup = (xfs_dir2_data_unused_t *)ptr; dup = bp->b_addr + offset;
/* /*
* No, it's unused, skip over it. * No, it's unused, skip over it.
*/ */
if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) { if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
length = be16_to_cpu(dup->length); length = be16_to_cpu(dup->length);
ptr += length; offset += length;
curoff += length; curoff += length;
continue; continue;
} }
dep = (xfs_dir2_data_entry_t *)ptr; dep = bp->b_addr + offset;
length = dp->d_ops->data_entsize(dep->namelen); length = xfs_dir2_data_entsize(mp, dep->namelen);
filetype = dp->d_ops->data_get_ftype(dep); filetype = xfs_dir2_data_get_ftype(mp, dep);
ctx->pos = xfs_dir2_byte_to_dataptr(curoff) & 0x7fffffff; ctx->pos = xfs_dir2_byte_to_dataptr(curoff) & 0x7fffffff;
if (XFS_IS_CORRUPT(dp->i_mount,
!xfs_dir2_namecheck(dep->name,
dep->namelen))) {
error = -EFSCORRUPTED;
break;
}
if (!dir_emit(ctx, (char *)dep->name, dep->namelen, if (!dir_emit(ctx, (char *)dep->name, dep->namelen,
be64_to_cpu(dep->inumber), be64_to_cpu(dep->inumber),
xfs_dir3_get_dtype(dp->i_mount, filetype))) xfs_dir3_get_dtype(dp->i_mount, filetype)))
@ -464,7 +473,7 @@ xfs_dir2_leaf_getdents(
/* /*
* Advance to next entry in the block. * Advance to next entry in the block.
*/ */
ptr += length; offset += length;
curoff += length; curoff += length;
/* bufsize may have just been a guess; don't go negative */ /* bufsize may have just been a guess; don't go negative */
bufsize = bufsize > length ? bufsize - length : 0; bufsize = bufsize > length ? bufsize - length : 0;

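The getdents hunks above (xfs_dir2_sf_getdents, xfs_dir2_block_getdents, xfs_dir2_leaf_getdents) replace the per-inode d_ops indirection and raw pointer arithmetic with plain byte offsets from bp->b_addr, and they now reject any entry whose name fails xfs_dir2_namecheck() with -EFSCORRUPTED instead of handing it to the VFS. A minimal userspace sketch of the same offset-based walk over a packed buffer; the structs, the free-tag value, and the fixed entry size here are illustrative stand-ins, not the on-disk XFS directory format:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FAKE_FREE_TAG	0xffff		/* stand-in for XFS_DIR2_DATA_FREE_TAG */

struct fake_unused { uint16_t freetag; uint16_t length; };
struct fake_entry  { uint16_t freetag; uint8_t namelen; char name[13]; };

/* Walk a packed buffer by byte offset, skipping unused regions. */
static void walk(const unsigned char *buf, unsigned int end)
{
	unsigned int offset = 0;

	while (offset < end) {
		const struct fake_unused *dup = (const void *)(buf + offset);
		const struct fake_entry *dep = (const void *)(buf + offset);

		if (dup->freetag == FAKE_FREE_TAG) {	/* unused, skip it */
			offset += dup->length;
			continue;
		}
		printf("entry at offset %u: %.*s\n", offset,
		       dep->namelen, dep->name);
		offset += sizeof(*dep);		/* fixed-size toy entries */
	}
}

int main(void)
{
	unsigned char buf[2 * sizeof(struct fake_entry)];
	struct fake_entry e = { .freetag = 0, .namelen = 5 };
	struct fake_unused u = { .freetag = FAKE_FREE_TAG,
				 .length = sizeof(struct fake_entry) };

	memcpy(e.name, "hello", 5);
	memcpy(buf, &e, sizeof(e));
	memcpy(buf + sizeof(e), &u, sizeof(u));
	walk(buf, sizeof(buf));
	return 0;
}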

@ -13,6 +13,7 @@
#include "xfs_btree.h" #include "xfs_btree.h"
#include "xfs_alloc_btree.h" #include "xfs_alloc_btree.h"
#include "xfs_alloc.h" #include "xfs_alloc.h"
#include "xfs_discard.h"
#include "xfs_error.h" #include "xfs_error.h"
#include "xfs_extent_busy.h" #include "xfs_extent_busy.h"
#include "xfs_trace.h" #include "xfs_trace.h"
@ -70,7 +71,10 @@ xfs_trim_extents(
error = xfs_alloc_get_rec(cur, &fbno, &flen, &i); error = xfs_alloc_get_rec(cur, &fbno, &flen, &i);
if (error) if (error)
goto out_del_cursor; goto out_del_cursor;
XFS_WANT_CORRUPTED_GOTO(mp, i == 1, out_del_cursor); if (XFS_IS_CORRUPT(mp, i != 1)) {
error = -EFSCORRUPTED;
goto out_del_cursor;
}
ASSERT(flen <= be32_to_cpu(XFS_BUF_TO_AGF(agbp)->agf_longest)); ASSERT(flen <= be32_to_cpu(XFS_BUF_TO_AGF(agbp)->agf_longest));
/* /*


@ -48,7 +48,7 @@ static struct lock_class_key xfs_dquot_project_class;
*/ */
void void
xfs_qm_dqdestroy( xfs_qm_dqdestroy(
xfs_dquot_t *dqp) struct xfs_dquot *dqp)
{ {
ASSERT(list_empty(&dqp->q_lru)); ASSERT(list_empty(&dqp->q_lru));
@ -56,7 +56,7 @@ xfs_qm_dqdestroy(
mutex_destroy(&dqp->q_qlock); mutex_destroy(&dqp->q_qlock);
XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot); XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot);
kmem_zone_free(xfs_qm_dqzone, dqp); kmem_cache_free(xfs_qm_dqzone, dqp);
} }
/* /*
@ -113,8 +113,8 @@ xfs_qm_adjust_dqlimits(
*/ */
void void
xfs_qm_adjust_dqtimers( xfs_qm_adjust_dqtimers(
xfs_mount_t *mp, struct xfs_mount *mp,
xfs_disk_dquot_t *d) struct xfs_disk_dquot *d)
{ {
ASSERT(d->d_id); ASSERT(d->d_id);
@ -305,8 +305,8 @@ xfs_dquot_disk_alloc(
/* Create the block mapping. */ /* Create the block mapping. */
xfs_trans_ijoin(tp, quotip, XFS_ILOCK_EXCL); xfs_trans_ijoin(tp, quotip, XFS_ILOCK_EXCL);
error = xfs_bmapi_write(tp, quotip, dqp->q_fileoffset, error = xfs_bmapi_write(tp, quotip, dqp->q_fileoffset,
XFS_DQUOT_CLUSTER_SIZE_FSB, XFS_BMAPI_METADATA, XFS_DQUOT_CLUSTER_SIZE_FSB, XFS_BMAPI_METADATA, 0, &map,
XFS_QM_DQALLOC_SPACE_RES(mp), &map, &nmaps); &nmaps);
if (error) if (error)
return error; return error;
ASSERT(map.br_blockcount == XFS_DQUOT_CLUSTER_SIZE_FSB); ASSERT(map.br_blockcount == XFS_DQUOT_CLUSTER_SIZE_FSB);
@ -497,7 +497,7 @@ xfs_dquot_from_disk(
struct xfs_disk_dquot *ddqp = bp->b_addr + dqp->q_bufoffset; struct xfs_disk_dquot *ddqp = bp->b_addr + dqp->q_bufoffset;
/* copy everything from disk dquot to the incore dquot */ /* copy everything from disk dquot to the incore dquot */
memcpy(&dqp->q_core, ddqp, sizeof(xfs_disk_dquot_t)); memcpy(&dqp->q_core, ddqp, sizeof(struct xfs_disk_dquot));
/* /*
* Reservation counters are defined as reservation plus current usage * Reservation counters are defined as reservation plus current usage
@ -833,7 +833,7 @@ xfs_qm_id_for_quotatype(
case XFS_DQ_GROUP: case XFS_DQ_GROUP:
return ip->i_d.di_gid; return ip->i_d.di_gid;
case XFS_DQ_PROJ: case XFS_DQ_PROJ:
return xfs_get_projid(ip); return ip->i_d.di_projid;
} }
ASSERT(0); ASSERT(0);
return 0; return 0;
@ -989,7 +989,7 @@ xfs_qm_dqput(
*/ */
void void
xfs_qm_dqrele( xfs_qm_dqrele(
xfs_dquot_t *dqp) struct xfs_dquot *dqp)
{ {
if (!dqp) if (!dqp)
return; return;
@ -1018,8 +1018,8 @@ xfs_qm_dqflush_done(
struct xfs_buf *bp, struct xfs_buf *bp,
struct xfs_log_item *lip) struct xfs_log_item *lip)
{ {
xfs_dq_logitem_t *qip = (struct xfs_dq_logitem *)lip; struct xfs_dq_logitem *qip = (struct xfs_dq_logitem *)lip;
xfs_dquot_t *dqp = qip->qli_dquot; struct xfs_dquot *dqp = qip->qli_dquot;
struct xfs_ail *ailp = lip->li_ailp; struct xfs_ail *ailp = lip->li_ailp;
/* /*
@ -1126,11 +1126,11 @@ xfs_qm_dqflush(
xfs_buf_relse(bp); xfs_buf_relse(bp);
xfs_dqfunlock(dqp); xfs_dqfunlock(dqp);
xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
return -EIO; return -EFSCORRUPTED;
} }
/* This is the only portion of data that needs to persist */ /* This is the only portion of data that needs to persist */
memcpy(ddqp, &dqp->q_core, sizeof(xfs_disk_dquot_t)); memcpy(ddqp, &dqp->q_core, sizeof(struct xfs_disk_dquot));
/* /*
* Clear the dirty field and remember the flush lsn for later use. * Clear the dirty field and remember the flush lsn for later use.
@ -1188,8 +1188,8 @@ out_unlock:
*/ */
void void
xfs_dqlock2( xfs_dqlock2(
xfs_dquot_t *d1, struct xfs_dquot *d1,
xfs_dquot_t *d2) struct xfs_dquot *d2)
{ {
if (d1 && d2) { if (d1 && d2) {
ASSERT(d1 != d2); ASSERT(d1 != d2);
@ -1211,20 +1211,22 @@ xfs_dqlock2(
int __init int __init
xfs_qm_init(void) xfs_qm_init(void)
{ {
xfs_qm_dqzone = xfs_qm_dqzone = kmem_cache_create("xfs_dquot",
kmem_zone_init(sizeof(struct xfs_dquot), "xfs_dquot"); sizeof(struct xfs_dquot),
0, 0, NULL);
if (!xfs_qm_dqzone) if (!xfs_qm_dqzone)
goto out; goto out;
xfs_qm_dqtrxzone = xfs_qm_dqtrxzone = kmem_cache_create("xfs_dqtrx",
kmem_zone_init(sizeof(struct xfs_dquot_acct), "xfs_dqtrx"); sizeof(struct xfs_dquot_acct),
0, 0, NULL);
if (!xfs_qm_dqtrxzone) if (!xfs_qm_dqtrxzone)
goto out_free_dqzone; goto out_free_dqzone;
return 0; return 0;
out_free_dqzone: out_free_dqzone:
kmem_zone_destroy(xfs_qm_dqzone); kmem_cache_destroy(xfs_qm_dqzone);
out: out:
return -ENOMEM; return -ENOMEM;
} }
@ -1232,8 +1234,8 @@ out:
void void
xfs_qm_exit(void) xfs_qm_exit(void)
{ {
kmem_zone_destroy(xfs_qm_dqtrxzone); kmem_cache_destroy(xfs_qm_dqtrxzone);
kmem_zone_destroy(xfs_qm_dqzone); kmem_cache_destroy(xfs_qm_dqzone);
} }
/* /*

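Most of the kmem_zone_* churn in this pull (here and in the buffer, buf item, extent-free item, icreate and inode hunks) is the same mechanical conversion to the generic slab API: kmem_zone_init(size, name) becomes kmem_cache_create(name, size, 0, 0, NULL), and kmem_zone_free()/kmem_zone_destroy() become kmem_cache_free()/kmem_cache_destroy(). A condensed kernel-style sketch of the resulting setup/teardown shape, using the quota zones from the hunk above; the zone pointers are declared as plain struct kmem_cache pointers here purely for the sketch:

static struct kmem_cache *xfs_qm_dqzone;
static struct kmem_cache *xfs_qm_dqtrxzone;

int __init
xfs_qm_init(void)
{
	xfs_qm_dqzone = kmem_cache_create("xfs_dquot",
			sizeof(struct xfs_dquot), 0, 0, NULL);
	if (!xfs_qm_dqzone)
		goto out;

	xfs_qm_dqtrxzone = kmem_cache_create("xfs_dqtrx",
			sizeof(struct xfs_dquot_acct), 0, 0, NULL);
	if (!xfs_qm_dqtrxzone)
		goto out_free_dqzone;

	return 0;

out_free_dqzone:
	kmem_cache_destroy(xfs_qm_dqzone);	/* unwind in reverse order */
out:
	return -ENOMEM;
}

void
xfs_qm_exit(void)
{
	kmem_cache_destroy(xfs_qm_dqtrxzone);
	kmem_cache_destroy(xfs_qm_dqzone);
}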

@ -30,33 +30,36 @@ enum {
/* /*
* The incore dquot structure * The incore dquot structure
*/ */
typedef struct xfs_dquot { struct xfs_dquot {
uint dq_flags; /* various flags (XFS_DQ_*) */ uint dq_flags;
struct list_head q_lru; /* global free list of dquots */ struct list_head q_lru;
struct xfs_mount*q_mount; /* filesystem this relates to */ struct xfs_mount *q_mount;
uint q_nrefs; /* # active refs from inodes */ uint q_nrefs;
xfs_daddr_t q_blkno; /* blkno of dquot buffer */ xfs_daddr_t q_blkno;
int q_bufoffset; /* off of dq in buffer (# dquots) */ int q_bufoffset;
xfs_fileoff_t q_fileoffset; /* offset in quotas file */ xfs_fileoff_t q_fileoffset;
xfs_disk_dquot_t q_core; /* actual usage & quotas */ struct xfs_disk_dquot q_core;
xfs_dq_logitem_t q_logitem; /* dquot log item */ struct xfs_dq_logitem q_logitem;
xfs_qcnt_t q_res_bcount; /* total regular nblks used+reserved */ /* total regular nblks used+reserved */
xfs_qcnt_t q_res_icount; /* total inos allocd+reserved */ xfs_qcnt_t q_res_bcount;
xfs_qcnt_t q_res_rtbcount;/* total realtime blks used+reserved */ /* total inos allocd+reserved */
xfs_qcnt_t q_prealloc_lo_wmark;/* prealloc throttle wmark */ xfs_qcnt_t q_res_icount;
xfs_qcnt_t q_prealloc_hi_wmark;/* prealloc disabled wmark */ /* total realtime blks used+reserved */
int64_t q_low_space[XFS_QLOWSP_MAX]; xfs_qcnt_t q_res_rtbcount;
struct mutex q_qlock; /* quota lock */ xfs_qcnt_t q_prealloc_lo_wmark;
struct completion q_flush; /* flush completion queue */ xfs_qcnt_t q_prealloc_hi_wmark;
atomic_t q_pincount; /* dquot pin count */ int64_t q_low_space[XFS_QLOWSP_MAX];
wait_queue_head_t q_pinwait; /* dquot pinning wait queue */ struct mutex q_qlock;
} xfs_dquot_t; struct completion q_flush;
atomic_t q_pincount;
struct wait_queue_head q_pinwait;
};
/* /*
* Lock hierarchy for q_qlock: * Lock hierarchy for q_qlock:
* XFS_QLOCK_NORMAL is the implicit default, * XFS_QLOCK_NORMAL is the implicit default,
* XFS_QLOCK_NESTED is the dquot with the higher id in xfs_dqlock2 * XFS_QLOCK_NESTED is the dquot with the higher id in xfs_dqlock2
*/ */
enum { enum {
XFS_QLOCK_NORMAL = 0, XFS_QLOCK_NORMAL = 0,
@ -64,21 +67,21 @@ enum {
}; };
/* /*
* Manage the q_flush completion queue embedded in the dquot. This completion * Manage the q_flush completion queue embedded in the dquot. This completion
* queue synchronizes processes attempting to flush the in-core dquot back to * queue synchronizes processes attempting to flush the in-core dquot back to
* disk. * disk.
*/ */
static inline void xfs_dqflock(xfs_dquot_t *dqp) static inline void xfs_dqflock(struct xfs_dquot *dqp)
{ {
wait_for_completion(&dqp->q_flush); wait_for_completion(&dqp->q_flush);
} }
static inline bool xfs_dqflock_nowait(xfs_dquot_t *dqp) static inline bool xfs_dqflock_nowait(struct xfs_dquot *dqp)
{ {
return try_wait_for_completion(&dqp->q_flush); return try_wait_for_completion(&dqp->q_flush);
} }
static inline void xfs_dqfunlock(xfs_dquot_t *dqp) static inline void xfs_dqfunlock(struct xfs_dquot *dqp)
{ {
complete(&dqp->q_flush); complete(&dqp->q_flush);
} }
@ -112,7 +115,7 @@ static inline int xfs_this_quota_on(struct xfs_mount *mp, int type)
} }
} }
static inline xfs_dquot_t *xfs_inode_dquot(struct xfs_inode *ip, int type) static inline struct xfs_dquot *xfs_inode_dquot(struct xfs_inode *ip, int type)
{ {
switch (type & XFS_DQ_ALLTYPES) { switch (type & XFS_DQ_ALLTYPES) {
case XFS_DQ_USER: case XFS_DQ_USER:
@ -147,31 +150,30 @@ static inline bool xfs_dquot_lowsp(struct xfs_dquot *dqp)
#define XFS_QM_ISPDQ(dqp) ((dqp)->dq_flags & XFS_DQ_PROJ) #define XFS_QM_ISPDQ(dqp) ((dqp)->dq_flags & XFS_DQ_PROJ)
#define XFS_QM_ISGDQ(dqp) ((dqp)->dq_flags & XFS_DQ_GROUP) #define XFS_QM_ISGDQ(dqp) ((dqp)->dq_flags & XFS_DQ_GROUP)
extern void xfs_qm_dqdestroy(xfs_dquot_t *); void xfs_qm_dqdestroy(struct xfs_dquot *dqp);
extern int xfs_qm_dqflush(struct xfs_dquot *, struct xfs_buf **); int xfs_qm_dqflush(struct xfs_dquot *dqp, struct xfs_buf **bpp);
extern void xfs_qm_dqunpin_wait(xfs_dquot_t *); void xfs_qm_dqunpin_wait(struct xfs_dquot *dqp);
extern void xfs_qm_adjust_dqtimers(xfs_mount_t *, void xfs_qm_adjust_dqtimers(struct xfs_mount *mp,
xfs_disk_dquot_t *); struct xfs_disk_dquot *d);
extern void xfs_qm_adjust_dqlimits(struct xfs_mount *, void xfs_qm_adjust_dqlimits(struct xfs_mount *mp,
struct xfs_dquot *); struct xfs_dquot *d);
extern xfs_dqid_t xfs_qm_id_for_quotatype(struct xfs_inode *ip, xfs_dqid_t xfs_qm_id_for_quotatype(struct xfs_inode *ip, uint type);
uint type); int xfs_qm_dqget(struct xfs_mount *mp, xfs_dqid_t id,
extern int xfs_qm_dqget(struct xfs_mount *mp, xfs_dqid_t id,
uint type, bool can_alloc, uint type, bool can_alloc,
struct xfs_dquot **dqpp); struct xfs_dquot **dqpp);
extern int xfs_qm_dqget_inode(struct xfs_inode *ip, uint type, int xfs_qm_dqget_inode(struct xfs_inode *ip, uint type,
bool can_alloc, bool can_alloc,
struct xfs_dquot **dqpp); struct xfs_dquot **dqpp);
extern int xfs_qm_dqget_next(struct xfs_mount *mp, xfs_dqid_t id, int xfs_qm_dqget_next(struct xfs_mount *mp, xfs_dqid_t id,
uint type, struct xfs_dquot **dqpp); uint type, struct xfs_dquot **dqpp);
extern int xfs_qm_dqget_uncached(struct xfs_mount *mp, int xfs_qm_dqget_uncached(struct xfs_mount *mp,
xfs_dqid_t id, uint type, xfs_dqid_t id, uint type,
struct xfs_dquot **dqpp); struct xfs_dquot **dqpp);
extern void xfs_qm_dqput(xfs_dquot_t *); void xfs_qm_dqput(struct xfs_dquot *dqp);
extern void xfs_dqlock2(struct xfs_dquot *, struct xfs_dquot *); void xfs_dqlock2(struct xfs_dquot *, struct xfs_dquot *);
extern void xfs_dquot_set_prealloc_limits(struct xfs_dquot *); void xfs_dquot_set_prealloc_limits(struct xfs_dquot *);
static inline struct xfs_dquot *xfs_qm_dqhold(struct xfs_dquot *dqp) static inline struct xfs_dquot *xfs_qm_dqhold(struct xfs_dquot *dqp)
{ {


@ -11,25 +11,27 @@ struct xfs_trans;
struct xfs_mount; struct xfs_mount;
struct xfs_qoff_logitem; struct xfs_qoff_logitem;
typedef struct xfs_dq_logitem { struct xfs_dq_logitem {
struct xfs_log_item qli_item; /* common portion */ struct xfs_log_item qli_item; /* common portion */
struct xfs_dquot *qli_dquot; /* dquot ptr */ struct xfs_dquot *qli_dquot; /* dquot ptr */
xfs_lsn_t qli_flush_lsn; /* lsn at last flush */ xfs_lsn_t qli_flush_lsn; /* lsn at last flush */
} xfs_dq_logitem_t; };
typedef struct xfs_qoff_logitem { struct xfs_qoff_logitem {
struct xfs_log_item qql_item; /* common portion */ struct xfs_log_item qql_item; /* common portion */
struct xfs_qoff_logitem *qql_start_lip; /* qoff-start logitem, if any */ struct xfs_qoff_logitem *qql_start_lip; /* qoff-start logitem, if any */
unsigned int qql_flags; unsigned int qql_flags;
} xfs_qoff_logitem_t; };
extern void xfs_qm_dquot_logitem_init(struct xfs_dquot *); void xfs_qm_dquot_logitem_init(struct xfs_dquot *dqp);
extern xfs_qoff_logitem_t *xfs_qm_qoff_logitem_init(struct xfs_mount *, struct xfs_qoff_logitem *xfs_qm_qoff_logitem_init(struct xfs_mount *mp,
struct xfs_qoff_logitem *, uint); struct xfs_qoff_logitem *start,
extern xfs_qoff_logitem_t *xfs_trans_get_qoff_item(struct xfs_trans *, uint flags);
struct xfs_qoff_logitem *, uint); struct xfs_qoff_logitem *xfs_trans_get_qoff_item(struct xfs_trans *tp,
extern void xfs_trans_log_quotaoff_item(struct xfs_trans *, struct xfs_qoff_logitem *startqoff,
struct xfs_qoff_logitem *); uint flags);
void xfs_trans_log_quotaoff_item(struct xfs_trans *tp,
struct xfs_qoff_logitem *qlp);
#endif /* __XFS_DQUOT_ITEM_H__ */ #endif /* __XFS_DQUOT_ITEM_H__ */


@ -257,7 +257,7 @@ xfs_errortag_test(
xfs_warn_ratelimited(mp, xfs_warn_ratelimited(mp,
"Injecting error (%s) at file %s, line %d, on filesystem \"%s\"", "Injecting error (%s) at file %s, line %d, on filesystem \"%s\"",
expression, file, line, mp->m_fsname); expression, file, line, mp->m_super->s_id);
return true; return true;
} }
@ -329,18 +329,39 @@ xfs_corruption_error(
const char *tag, const char *tag,
int level, int level,
struct xfs_mount *mp, struct xfs_mount *mp,
void *buf, const void *buf,
size_t bufsize, size_t bufsize,
const char *filename, const char *filename,
int linenum, int linenum,
xfs_failaddr_t failaddr) xfs_failaddr_t failaddr)
{ {
if (level <= xfs_error_level) if (buf && level <= xfs_error_level)
xfs_hex_dump(buf, bufsize); xfs_hex_dump(buf, bufsize);
xfs_error_report(tag, level, mp, filename, linenum, failaddr); xfs_error_report(tag, level, mp, filename, linenum, failaddr);
xfs_alert(mp, "Corruption detected. Unmount and run xfs_repair"); xfs_alert(mp, "Corruption detected. Unmount and run xfs_repair");
} }
/*
* Complain about the kinds of metadata corruption that we can't detect from a
* verifier, such as incorrect inter-block relationship data. Does not set
* bp->b_error.
*/
void
xfs_buf_corruption_error(
struct xfs_buf *bp)
{
struct xfs_mount *mp = bp->b_mount;
xfs_alert_tag(mp, XFS_PTAG_VERIFIER_ERROR,
"Metadata corruption detected at %pS, %s block 0x%llx",
__return_address, bp->b_ops->name, bp->b_bn);
xfs_alert(mp, "Unmount and run xfs_repair");
if (xfs_error_level >= XFS_ERRLEVEL_HIGH)
xfs_stack_trace();
}
/* /*
* Warnings specifically for verifier errors. Differentiate CRC vs. invalid * Warnings specifically for verifier errors. Differentiate CRC vs. invalid
* values, and omit the stack trace unless the error level is tuned high. * values, and omit the stack trace unless the error level is tuned high.
@ -350,7 +371,7 @@ xfs_buf_verifier_error(
struct xfs_buf *bp, struct xfs_buf *bp,
int error, int error,
const char *name, const char *name,
void *buf, const void *buf,
size_t bufsz, size_t bufsz,
xfs_failaddr_t failaddr) xfs_failaddr_t failaddr)
{ {
@ -402,7 +423,7 @@ xfs_inode_verifier_error(
struct xfs_inode *ip, struct xfs_inode *ip,
int error, int error,
const char *name, const char *name,
void *buf, const void *buf,
size_t bufsz, size_t bufsz,
xfs_failaddr_t failaddr) xfs_failaddr_t failaddr)
{ {

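xfs_buf_corruption_error() above is the new complaint helper for damage that a buffer verifier cannot catch, such as bad inter-block relationships rather than a bad block. Its callers appear further down in this diff, in the inode unlinked-list code; the shape is simply "log, then return -EFSCORRUPTED", for example:

	/* As in xfs_iunlink_update_bucket() later in this diff: the AGI
	 * bucket already points at the inode being added, which no
	 * single-buffer verifier can detect. */
	if (old_value == new_agino) {
		xfs_buf_corruption_error(agibp);
		return -EFSCORRUPTED;
	}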

@ -12,16 +12,17 @@ extern void xfs_error_report(const char *tag, int level, struct xfs_mount *mp,
const char *filename, int linenum, const char *filename, int linenum,
xfs_failaddr_t failaddr); xfs_failaddr_t failaddr);
extern void xfs_corruption_error(const char *tag, int level, extern void xfs_corruption_error(const char *tag, int level,
struct xfs_mount *mp, void *buf, size_t bufsize, struct xfs_mount *mp, const void *buf, size_t bufsize,
const char *filename, int linenum, const char *filename, int linenum,
xfs_failaddr_t failaddr); xfs_failaddr_t failaddr);
void xfs_buf_corruption_error(struct xfs_buf *bp);
extern void xfs_buf_verifier_error(struct xfs_buf *bp, int error, extern void xfs_buf_verifier_error(struct xfs_buf *bp, int error,
const char *name, void *buf, size_t bufsz, const char *name, const void *buf, size_t bufsz,
xfs_failaddr_t failaddr); xfs_failaddr_t failaddr);
extern void xfs_verifier_error(struct xfs_buf *bp, int error, extern void xfs_verifier_error(struct xfs_buf *bp, int error,
xfs_failaddr_t failaddr); xfs_failaddr_t failaddr);
extern void xfs_inode_verifier_error(struct xfs_inode *ip, int error, extern void xfs_inode_verifier_error(struct xfs_inode *ip, int error,
const char *name, void *buf, size_t bufsz, const char *name, const void *buf, size_t bufsz,
xfs_failaddr_t failaddr); xfs_failaddr_t failaddr);
#define XFS_ERROR_REPORT(e, lvl, mp) \ #define XFS_ERROR_REPORT(e, lvl, mp) \
@ -37,32 +38,6 @@ extern void xfs_inode_verifier_error(struct xfs_inode *ip, int error,
/* Dump 128 bytes of any corrupt buffer */ /* Dump 128 bytes of any corrupt buffer */
#define XFS_CORRUPTION_DUMP_LEN (128) #define XFS_CORRUPTION_DUMP_LEN (128)
/*
* Macros to set EFSCORRUPTED & return/branch.
*/
#define XFS_WANT_CORRUPTED_GOTO(mp, x, l) \
{ \
int fs_is_ok = (x); \
ASSERT(fs_is_ok); \
if (unlikely(!fs_is_ok)) { \
XFS_ERROR_REPORT("XFS_WANT_CORRUPTED_GOTO", \
XFS_ERRLEVEL_LOW, mp); \
error = -EFSCORRUPTED; \
goto l; \
} \
}
#define XFS_WANT_CORRUPTED_RETURN(mp, x) \
{ \
int fs_is_ok = (x); \
ASSERT(fs_is_ok); \
if (unlikely(!fs_is_ok)) { \
XFS_ERROR_REPORT("XFS_WANT_CORRUPTED_RETURN", \
XFS_ERRLEVEL_LOW, mp); \
return -EFSCORRUPTED; \
} \
}
#ifdef DEBUG #ifdef DEBUG
extern int xfs_errortag_init(struct xfs_mount *mp); extern int xfs_errortag_init(struct xfs_mount *mp);
extern void xfs_errortag_del(struct xfs_mount *mp); extern void xfs_errortag_del(struct xfs_mount *mp);

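With XFS_WANT_CORRUPTED_GOTO/XFS_WANT_CORRUPTED_RETURN gone, callers open-code the check, as the xfs_trim_extents() hunk earlier in this diff shows: test the invariant with XFS_IS_CORRUPT(), set -EFSCORRUPTED explicitly, and jump to the unwind label, so every corruption exit is logged and visible at the call site. A userspace approximation of that shape; the IS_CORRUPT macro body below is a stand-in so the sketch compiles on its own, not the kernel definition, and EFSCORRUPTED is mapped to EUCLEAN the way the XFS headers do:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#ifndef EFSCORRUPTED
#define EFSCORRUPTED	EUCLEAN		/* same mapping the XFS headers use */
#endif

/* Stand-in for XFS_IS_CORRUPT(): report and yield true when 'check' holds. */
#define IS_CORRUPT(fsname, check)					\
	((check) ? (fprintf(stderr, "%s: corruption: %s at %s:%d\n",	\
			    (fsname), #check, __FILE__, __LINE__), true) \
		 : false)

static int get_free_extent_rec(const char *fsname, int stat)
{
	int error = 0;

	/* Old style: XFS_WANT_CORRUPTED_GOTO(mp, stat == 1, out_del_cursor) */
	if (IS_CORRUPT(fsname, stat != 1)) {
		error = -EFSCORRUPTED;
		goto out_del_cursor;
	}
	/* ... use the record ... */
out_del_cursor:
	return error;
}

int main(void)
{
	printf("good record -> %d\n", get_free_extent_rec("demo", 1));
	printf("bad record  -> %d\n", get_free_extent_rec("demo", 0));
	return 0;
}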

@ -367,7 +367,7 @@ restart:
* If this is a metadata allocation, try to reuse the busy * If this is a metadata allocation, try to reuse the busy
* extent instead of trimming the allocation. * extent instead of trimming the allocation.
*/ */
if (!xfs_alloc_is_userdata(args->datatype) && if (!(args->datatype & XFS_ALLOC_USERDATA) &&
!(busyp->flags & XFS_EXTENT_BUSY_DISCARDED)) { !(busyp->flags & XFS_EXTENT_BUSY_DISCARDED)) {
if (!xfs_extent_busy_update_extent(args->mp, args->pag, if (!xfs_extent_busy_update_extent(args->mp, args->pag,
busyp, fbno, flen, busyp, fbno, flen,


@ -21,7 +21,7 @@
#include "xfs_alloc.h" #include "xfs_alloc.h"
#include "xfs_bmap.h" #include "xfs_bmap.h"
#include "xfs_trace.h" #include "xfs_trace.h"
#include "xfs_error.h"
kmem_zone_t *xfs_efi_zone; kmem_zone_t *xfs_efi_zone;
kmem_zone_t *xfs_efd_zone; kmem_zone_t *xfs_efd_zone;
@ -39,7 +39,7 @@ xfs_efi_item_free(
if (efip->efi_format.efi_nextents > XFS_EFI_MAX_FAST_EXTENTS) if (efip->efi_format.efi_nextents > XFS_EFI_MAX_FAST_EXTENTS)
kmem_free(efip); kmem_free(efip);
else else
kmem_zone_free(xfs_efi_zone, efip); kmem_cache_free(xfs_efi_zone, efip);
} }
/* /*
@ -228,6 +228,7 @@ xfs_efi_copy_format(xfs_log_iovec_t *buf, xfs_efi_log_format_t *dst_efi_fmt)
} }
return 0; return 0;
} }
XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
@ -243,7 +244,7 @@ xfs_efd_item_free(struct xfs_efd_log_item *efdp)
if (efdp->efd_format.efd_nextents > XFS_EFD_MAX_FAST_EXTENTS) if (efdp->efd_format.efd_nextents > XFS_EFD_MAX_FAST_EXTENTS)
kmem_free(efdp); kmem_free(efdp);
else else
kmem_zone_free(xfs_efd_zone, efdp); kmem_cache_free(xfs_efd_zone, efdp);
} }
/* /*
@ -624,7 +625,7 @@ xfs_efi_recover(
*/ */
set_bit(XFS_EFI_RECOVERED, &efip->efi_flags); set_bit(XFS_EFI_RECOVERED, &efip->efi_flags);
xfs_efi_release(efip); xfs_efi_release(efip);
return -EIO; return -EFSCORRUPTED;
} }
} }


@ -188,7 +188,8 @@ xfs_file_dio_aio_read(
file_accessed(iocb->ki_filp); file_accessed(iocb->ki_filp);
xfs_ilock(ip, XFS_IOLOCK_SHARED); xfs_ilock(ip, XFS_IOLOCK_SHARED);
ret = iomap_dio_rw(iocb, to, &xfs_iomap_ops, NULL, is_sync_kiocb(iocb)); ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL,
is_sync_kiocb(iocb));
xfs_iunlock(ip, XFS_IOLOCK_SHARED); xfs_iunlock(ip, XFS_IOLOCK_SHARED);
return ret; return ret;
@ -215,7 +216,7 @@ xfs_file_dax_read(
xfs_ilock(ip, XFS_IOLOCK_SHARED); xfs_ilock(ip, XFS_IOLOCK_SHARED);
} }
ret = dax_iomap_rw(iocb, to, &xfs_iomap_ops); ret = dax_iomap_rw(iocb, to, &xfs_read_iomap_ops);
xfs_iunlock(ip, XFS_IOLOCK_SHARED); xfs_iunlock(ip, XFS_IOLOCK_SHARED);
file_accessed(iocb->ki_filp); file_accessed(iocb->ki_filp);
@ -351,7 +352,7 @@ restart:
trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize); trace_xfs_zero_eof(ip, isize, iocb->ki_pos - isize);
error = iomap_zero_range(inode, isize, iocb->ki_pos - isize, error = iomap_zero_range(inode, isize, iocb->ki_pos - isize,
NULL, &xfs_iomap_ops); NULL, &xfs_buffered_write_iomap_ops);
if (error) if (error)
return error; return error;
} else } else
@ -486,8 +487,7 @@ xfs_file_dio_aio_write(
int unaligned_io = 0; int unaligned_io = 0;
int iolock; int iolock;
size_t count = iov_iter_count(from); size_t count = iov_iter_count(from);
struct xfs_buftarg *target = XFS_IS_REALTIME_INODE(ip) ? struct xfs_buftarg *target = xfs_inode_buftarg(ip);
mp->m_rtdev_targp : mp->m_ddev_targp;
/* DIO must be aligned to device logical sector size */ /* DIO must be aligned to device logical sector size */
if ((iocb->ki_pos | count) & target->bt_logical_sectormask) if ((iocb->ki_pos | count) & target->bt_logical_sectormask)
@ -551,7 +551,8 @@ xfs_file_dio_aio_write(
* If unaligned, this is the only IO in-flight. Wait on it before we * If unaligned, this is the only IO in-flight. Wait on it before we
* release the iolock to prevent subsequent overlapping IO. * release the iolock to prevent subsequent overlapping IO.
*/ */
ret = iomap_dio_rw(iocb, from, &xfs_iomap_ops, &xfs_dio_write_ops, ret = iomap_dio_rw(iocb, from, &xfs_direct_write_iomap_ops,
&xfs_dio_write_ops,
is_sync_kiocb(iocb) || unaligned_io); is_sync_kiocb(iocb) || unaligned_io);
out: out:
xfs_iunlock(ip, iolock); xfs_iunlock(ip, iolock);
@ -591,7 +592,7 @@ xfs_file_dax_write(
count = iov_iter_count(from); count = iov_iter_count(from);
trace_xfs_file_dax_write(ip, count, pos); trace_xfs_file_dax_write(ip, count, pos);
ret = dax_iomap_rw(iocb, from, &xfs_iomap_ops); ret = dax_iomap_rw(iocb, from, &xfs_direct_write_iomap_ops);
if (ret > 0 && iocb->ki_pos > i_size_read(inode)) { if (ret > 0 && iocb->ki_pos > i_size_read(inode)) {
i_size_write(inode, iocb->ki_pos); i_size_write(inode, iocb->ki_pos);
error = xfs_setfilesize(ip, pos, ret); error = xfs_setfilesize(ip, pos, ret);
@ -638,7 +639,8 @@ write_retry:
current->backing_dev_info = inode_to_bdi(inode); current->backing_dev_info = inode_to_bdi(inode);
trace_xfs_file_buffered_write(ip, iov_iter_count(from), iocb->ki_pos); trace_xfs_file_buffered_write(ip, iov_iter_count(from), iocb->ki_pos);
ret = iomap_file_buffered_write(iocb, from, &xfs_iomap_ops); ret = iomap_file_buffered_write(iocb, from,
&xfs_buffered_write_iomap_ops);
if (likely(ret >= 0)) if (likely(ret >= 0))
iocb->ki_pos += ret; iocb->ki_pos += ret;
@ -815,6 +817,36 @@ xfs_file_fallocate(
if (error) if (error)
goto out_unlock; goto out_unlock;
/*
* Must wait for all AIO to complete before we continue as AIO can
* change the file size on completion without holding any locks we
* currently hold. We must do this first because AIO can update both
* the on disk and in memory inode sizes, and the operations that follow
* require the in-memory size to be fully up-to-date.
*/
inode_dio_wait(inode);
/*
* Now AIO and DIO has drained we flush and (if necessary) invalidate
* the cached range over the first operation we are about to run.
*
* We care about zero and collapse here because they both run a hole
* punch over the range first. Because that can zero data, and the range
* of invalidation for the shift operations is much larger, we still do
* the required flush for collapse in xfs_prepare_shift().
*
* Insert has the same range requirements as collapse, and we extend the
* file first which can zero data. Hence insert has the same
* flush/invalidate requirements as collapse and so they are both
* handled at the right time by xfs_prepare_shift().
*/
if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_ZERO_RANGE |
FALLOC_FL_COLLAPSE_RANGE)) {
error = xfs_flush_unmap_range(ip, offset, len);
if (error)
goto out_unlock;
}
if (mode & FALLOC_FL_PUNCH_HOLE) { if (mode & FALLOC_FL_PUNCH_HOLE) {
error = xfs_free_file_space(ip, offset, len); error = xfs_free_file_space(ip, offset, len);
if (error) if (error)
@ -878,16 +910,30 @@ xfs_file_fallocate(
} }
if (mode & FALLOC_FL_ZERO_RANGE) { if (mode & FALLOC_FL_ZERO_RANGE) {
error = xfs_zero_file_space(ip, offset, len); /*
* Punch a hole and prealloc the range. We use a hole
* punch rather than unwritten extent conversion for two
* reasons:
*
* 1.) Hole punch handles partial block zeroing for us.
* 2.) If prealloc returns ENOSPC, the file range is
* still zero-valued by virtue of the hole punch.
*/
unsigned int blksize = i_blocksize(inode);
trace_xfs_zero_file_space(ip);
error = xfs_free_file_space(ip, offset, len);
if (error)
goto out_unlock;
len = round_up(offset + len, blksize) -
round_down(offset, blksize);
offset = round_down(offset, blksize);
} else if (mode & FALLOC_FL_UNSHARE_RANGE) { } else if (mode & FALLOC_FL_UNSHARE_RANGE) {
error = xfs_reflink_unshare(ip, offset, len); error = xfs_reflink_unshare(ip, offset, len);
if (error) if (error)
goto out_unlock; goto out_unlock;
if (!xfs_is_always_cow_inode(ip)) {
error = xfs_alloc_file_space(ip, offset, len,
XFS_BMAPI_PREALLOC);
}
} else { } else {
/* /*
* If always_cow mode we can't use preallocations and * If always_cow mode we can't use preallocations and
@ -897,12 +943,14 @@ xfs_file_fallocate(
error = -EOPNOTSUPP; error = -EOPNOTSUPP;
goto out_unlock; goto out_unlock;
} }
}
if (!xfs_is_always_cow_inode(ip)) {
error = xfs_alloc_file_space(ip, offset, len, error = xfs_alloc_file_space(ip, offset, len,
XFS_BMAPI_PREALLOC); XFS_BMAPI_PREALLOC);
if (error)
goto out_unlock;
} }
if (error)
goto out_unlock;
} }
if (file->f_flags & O_DSYNC) if (file->f_flags & O_DSYNC)
@ -1056,7 +1104,7 @@ xfs_dir_open(
*/ */
mode = xfs_ilock_data_map_shared(ip); mode = xfs_ilock_data_map_shared(ip);
if (ip->i_d.di_nextents > 0) if (ip->i_d.di_nextents > 0)
error = xfs_dir3_data_readahead(ip, 0, -1); error = xfs_dir3_data_readahead(ip, 0, 0);
xfs_iunlock(ip, mode); xfs_iunlock(ip, mode);
return error; return error;
} }
@ -1153,12 +1201,16 @@ __xfs_filemap_fault(
if (IS_DAX(inode)) { if (IS_DAX(inode)) {
pfn_t pfn; pfn_t pfn;
ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL, &xfs_iomap_ops); ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
(write_fault && !vmf->cow_page) ?
&xfs_direct_write_iomap_ops :
&xfs_read_iomap_ops);
if (ret & VM_FAULT_NEEDDSYNC) if (ret & VM_FAULT_NEEDDSYNC)
ret = dax_finish_sync_fault(vmf, pe_size, pfn); ret = dax_finish_sync_fault(vmf, pe_size, pfn);
} else { } else {
if (write_fault) if (write_fault)
ret = iomap_page_mkwrite(vmf, &xfs_iomap_ops); ret = iomap_page_mkwrite(vmf,
&xfs_buffered_write_iomap_ops);
else else
ret = filemap_fault(vmf); ret = filemap_fault(vmf);
} }
@ -1222,22 +1274,22 @@ static const struct vm_operations_struct xfs_file_vm_ops = {
STATIC int STATIC int
xfs_file_mmap( xfs_file_mmap(
struct file *filp, struct file *file,
struct vm_area_struct *vma) struct vm_area_struct *vma)
{ {
struct dax_device *dax_dev; struct inode *inode = file_inode(file);
struct xfs_buftarg *target = xfs_inode_buftarg(XFS_I(inode));
dax_dev = xfs_find_daxdev_for_inode(file_inode(filp));
/* /*
* We don't support synchronous mappings for non-DAX files and * We don't support synchronous mappings for non-DAX files and
* for DAX files if underneath dax_device is not synchronous. * for DAX files if underneath dax_device is not synchronous.
*/ */
if (!daxdev_mapping_supported(vma, dax_dev)) if (!daxdev_mapping_supported(vma, target->bt_daxdev))
return -EOPNOTSUPP; return -EOPNOTSUPP;
file_accessed(filp); file_accessed(file);
vma->vm_ops = &xfs_file_vm_ops; vma->vm_ops = &xfs_file_vm_ops;
if (IS_DAX(file_inode(filp))) if (IS_DAX(inode))
vma->vm_flags |= VM_HUGEPAGE; vma->vm_flags |= VM_HUGEPAGE;
return 0; return 0;
} }

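The FALLOC_FL_ZERO_RANGE branch in the fallocate hunk above now punches a hole and then preallocates, so the rounding matters: the preallocated range is widened to the containing block boundaries so the partial blocks at either end, which the hole punch only zeroed, fall inside the allocation as well. A standalone check of that arithmetic; round_up/round_down are reimplemented here for userspace and, as with the kernel helpers, blksize is assumed to be a power of two:

#include <stdint.h>
#include <stdio.h>

/* Same semantics as the kernel helpers for power-of-two 'a'. */
#define round_down(x, a)	((x) & ~((uint64_t)(a) - 1))
#define round_up(x, a)		round_down((x) + (a) - 1, a)

int main(void)
{
	uint64_t blksize = 4096;
	uint64_t offset = 5000, len = 10000;	/* user-requested zero range */

	/* What xfs_file_fallocate() preallocates after the hole punch. */
	uint64_t new_len = round_up(offset + len, blksize) -
			   round_down(offset, blksize);
	uint64_t new_off = round_down(offset, blksize);

	printf("zeroed      : [%llu, %llu)\n",
	       (unsigned long long)offset,
	       (unsigned long long)(offset + len));
	printf("preallocated: [%llu, %llu)\n",
	       (unsigned long long)new_off,
	       (unsigned long long)(new_off + new_len));
	/* Prints [4096, 16384): the partial first and last blocks are
	 * included, so nothing the hole punch zeroed is left uncovered. */
	return 0;
}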

@ -18,6 +18,7 @@
#include "xfs_trace.h" #include "xfs_trace.h"
#include "xfs_ag_resv.h" #include "xfs_ag_resv.h"
#include "xfs_trans.h" #include "xfs_trans.h"
#include "xfs_filestream.h"
struct xfs_fstrm_item { struct xfs_fstrm_item {
struct xfs_mru_cache_elem mru; struct xfs_mru_cache_elem mru;
@ -374,7 +375,7 @@ xfs_filestream_new_ag(
startag = (item->ag + 1) % mp->m_sb.sb_agcount; startag = (item->ag + 1) % mp->m_sb.sb_agcount;
} }
if (xfs_alloc_is_userdata(ap->datatype)) if (ap->datatype & XFS_ALLOC_USERDATA)
flags |= XFS_PICK_USERDATA; flags |= XFS_PICK_USERDATA;
if (ap->tp->t_flags & XFS_TRANS_LOWMODE) if (ap->tp->t_flags & XFS_TRANS_LOWMODE)
flags |= XFS_PICK_LOWSPACE; flags |= XFS_PICK_LOWSPACE;


@ -146,6 +146,7 @@ xfs_fsmap_owner_from_rmap(
dest->fmr_owner = XFS_FMR_OWN_FREE; dest->fmr_owner = XFS_FMR_OWN_FREE;
break; break;
default: default:
ASSERT(0);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
return 0; return 0;


@ -44,7 +44,7 @@ xfs_inode_alloc(
if (!ip) if (!ip)
return NULL; return NULL;
if (inode_init_always(mp->m_super, VFS_I(ip))) { if (inode_init_always(mp->m_super, VFS_I(ip))) {
kmem_zone_free(xfs_inode_zone, ip); kmem_cache_free(xfs_inode_zone, ip);
return NULL; return NULL;
} }
@ -104,7 +104,7 @@ xfs_inode_free_callback(
ip->i_itemp = NULL; ip->i_itemp = NULL;
} }
kmem_zone_free(xfs_inode_zone, ip); kmem_cache_free(xfs_inode_zone, ip);
} }
static void static void
@ -1419,7 +1419,7 @@ xfs_inode_match_id(
return 0; return 0;
if ((eofb->eof_flags & XFS_EOF_FLAGS_PRID) && if ((eofb->eof_flags & XFS_EOF_FLAGS_PRID) &&
xfs_get_projid(ip) != eofb->eof_prid) ip->i_d.di_projid != eofb->eof_prid)
return 0; return 0;
return 1; return 1;
@ -1443,7 +1443,7 @@ xfs_inode_match_id_union(
return 1; return 1;
if ((eofb->eof_flags & XFS_EOF_FLAGS_PRID) && if ((eofb->eof_flags & XFS_EOF_FLAGS_PRID) &&
xfs_get_projid(ip) == eofb->eof_prid) ip->i_d.di_projid == eofb->eof_prid)
return 1; return 1;
return 0; return 0;


@ -55,7 +55,7 @@ STATIC void
xfs_icreate_item_release( xfs_icreate_item_release(
struct xfs_log_item *lip) struct xfs_log_item *lip)
{ {
kmem_zone_free(xfs_icreate_zone, ICR_ITEM(lip)); kmem_cache_free(xfs_icreate_zone, ICR_ITEM(lip));
} }
static const struct xfs_item_ops xfs_icreate_item_ops = { static const struct xfs_item_ops xfs_icreate_item_ops = {


@ -55,6 +55,12 @@ xfs_extlen_t
xfs_get_extsz_hint( xfs_get_extsz_hint(
struct xfs_inode *ip) struct xfs_inode *ip)
{ {
/*
* No point in aligning allocations if we need to COW to actually
* write to them.
*/
if (xfs_is_always_cow_inode(ip))
return 0;
if ((ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) && ip->i_d.di_extsize) if ((ip->i_d.di_flags & XFS_DIFLAG_EXTSIZE) && ip->i_d.di_extsize)
return ip->i_d.di_extsize; return ip->i_d.di_extsize;
if (XFS_IS_REALTIME_INODE(ip)) if (XFS_IS_REALTIME_INODE(ip))
@ -809,7 +815,7 @@ xfs_ialloc(
ip->i_d.di_uid = xfs_kuid_to_uid(current_fsuid()); ip->i_d.di_uid = xfs_kuid_to_uid(current_fsuid());
ip->i_d.di_gid = xfs_kgid_to_gid(current_fsgid()); ip->i_d.di_gid = xfs_kgid_to_gid(current_fsgid());
inode->i_rdev = rdev; inode->i_rdev = rdev;
xfs_set_projid(ip, prid); ip->i_d.di_projid = prid;
if (pip && XFS_INHERIT_GID(pip)) { if (pip && XFS_INHERIT_GID(pip)) {
ip->i_d.di_gid = pip->i_d.di_gid; ip->i_d.di_gid = pip->i_d.di_gid;
@ -845,8 +851,7 @@ xfs_ialloc(
inode_set_iversion(inode, 1); inode_set_iversion(inode, 1);
ip->i_d.di_flags2 = 0; ip->i_d.di_flags2 = 0;
ip->i_d.di_cowextsize = 0; ip->i_d.di_cowextsize = 0;
ip->i_d.di_crtime.t_sec = (int32_t)tv.tv_sec; ip->i_d.di_crtime = tv;
ip->i_d.di_crtime.t_nsec = (int32_t)tv.tv_nsec;
} }
@ -1418,7 +1423,7 @@ xfs_link(
* the tree quota mechanism could be circumvented. * the tree quota mechanism could be circumvented.
*/ */
if (unlikely((tdp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) && if (unlikely((tdp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
(xfs_get_projid(tdp) != xfs_get_projid(sip)))) { tdp->i_d.di_projid != sip->i_d.di_projid)) {
error = -EXDEV; error = -EXDEV;
goto error_return; goto error_return;
} }
@ -2130,8 +2135,10 @@ xfs_iunlink_update_bucket(
* passed in because either we're adding or removing ourselves from the * passed in because either we're adding or removing ourselves from the
* head of the list. * head of the list.
*/ */
if (old_value == new_agino) if (old_value == new_agino) {
xfs_buf_corruption_error(agibp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
}
agi->agi_unlinked[bucket_index] = cpu_to_be32(new_agino); agi->agi_unlinked[bucket_index] = cpu_to_be32(new_agino);
offset = offsetof(struct xfs_agi, agi_unlinked) + offset = offsetof(struct xfs_agi, agi_unlinked) +
@ -2194,6 +2201,8 @@ xfs_iunlink_update_inode(
/* Make sure the old pointer isn't garbage. */ /* Make sure the old pointer isn't garbage. */
old_value = be32_to_cpu(dip->di_next_unlinked); old_value = be32_to_cpu(dip->di_next_unlinked);
if (!xfs_verify_agino_or_null(mp, agno, old_value)) { if (!xfs_verify_agino_or_null(mp, agno, old_value)) {
xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__, dip,
sizeof(*dip), __this_address);
error = -EFSCORRUPTED; error = -EFSCORRUPTED;
goto out; goto out;
} }
@ -2205,8 +2214,11 @@ xfs_iunlink_update_inode(
*/ */
*old_next_agino = old_value; *old_next_agino = old_value;
if (old_value == next_agino) { if (old_value == next_agino) {
if (next_agino != NULLAGINO) if (next_agino != NULLAGINO) {
xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__,
dip, sizeof(*dip), __this_address);
error = -EFSCORRUPTED; error = -EFSCORRUPTED;
}
goto out; goto out;
} }
@ -2257,8 +2269,10 @@ xfs_iunlink(
*/ */
next_agino = be32_to_cpu(agi->agi_unlinked[bucket_index]); next_agino = be32_to_cpu(agi->agi_unlinked[bucket_index]);
if (next_agino == agino || if (next_agino == agino ||
!xfs_verify_agino_or_null(mp, agno, next_agino)) !xfs_verify_agino_or_null(mp, agno, next_agino)) {
xfs_buf_corruption_error(agibp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
}
if (next_agino != NULLAGINO) { if (next_agino != NULLAGINO) {
struct xfs_perag *pag; struct xfs_perag *pag;
@ -3196,6 +3210,7 @@ xfs_rename(
struct xfs_trans *tp; struct xfs_trans *tp;
struct xfs_inode *wip = NULL; /* whiteout inode */ struct xfs_inode *wip = NULL; /* whiteout inode */
struct xfs_inode *inodes[__XFS_SORT_INODES]; struct xfs_inode *inodes[__XFS_SORT_INODES];
struct xfs_buf *agibp;
int num_inodes = __XFS_SORT_INODES; int num_inodes = __XFS_SORT_INODES;
bool new_parent = (src_dp != target_dp); bool new_parent = (src_dp != target_dp);
bool src_is_directory = S_ISDIR(VFS_I(src_ip)->i_mode); bool src_is_directory = S_ISDIR(VFS_I(src_ip)->i_mode);
@ -3270,7 +3285,7 @@ xfs_rename(
* tree quota mechanism would be circumvented. * tree quota mechanism would be circumvented.
*/ */
if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) && if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
(xfs_get_projid(target_dp) != xfs_get_projid(src_ip)))) { target_dp->i_d.di_projid != src_ip->i_d.di_projid)) {
error = -EXDEV; error = -EXDEV;
goto out_trans_cancel; goto out_trans_cancel;
} }
@ -3327,7 +3342,6 @@ xfs_rename(
goto out_trans_cancel; goto out_trans_cancel;
xfs_bumplink(tp, wip); xfs_bumplink(tp, wip);
xfs_trans_log_inode(tp, wip, XFS_ILOG_CORE);
VFS_I(wip)->i_state &= ~I_LINKABLE; VFS_I(wip)->i_state &= ~I_LINKABLE;
} }
@ -3361,6 +3375,22 @@ xfs_rename(
* In case there is already an entry with the same * In case there is already an entry with the same
* name at the destination directory, remove it first. * name at the destination directory, remove it first.
*/ */
/*
* Check whether the replace operation will need to allocate
* blocks. This happens when the shortform directory lacks
* space and we have to convert it to a block format directory.
* When more blocks are necessary, we must lock the AGI first
* to preserve locking order (AGI -> AGF).
*/
if (xfs_dir2_sf_replace_needblock(target_dp, src_ip->i_ino)) {
error = xfs_read_agi(mp, tp,
XFS_INO_TO_AGNO(mp, target_ip->i_ino),
&agibp);
if (error)
goto out_trans_cancel;
}
error = xfs_dir_replace(tp, target_dp, target_name, error = xfs_dir_replace(tp, target_dp, target_name,
src_ip->i_ino, spaceres); src_ip->i_ino, spaceres);
if (error) if (error)

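The block added to xfs_rename() above is purely about lock ordering in the "target exists" case: if replacing the entry can force a short-form directory to convert to block format, the directory code will need to allocate (and lock the AGF), so the AGI buffer for the target inode's AG is read, and therefore locked, beforehand to keep the AGI -> AGF order stated in the comment. A comment-only restatement of the sequence the hunk enforces; the call chains beyond what the diff shows are not spelled out here:

/*
 * Rename over an existing target entry, as ordered by the hunk above:
 *
 *   if (xfs_dir2_sf_replace_needblock(target_dp, src_ip->i_ino))
 *           xfs_read_agi(mp, tp,
 *                        XFS_INO_TO_AGNO(mp, target_ip->i_ino), &agibp);
 *                                           (AGI locked first)
 *   xfs_dir_replace(tp, target_dp, target_name, src_ip->i_ino, spaceres);
 *                                           (may allocate, locking the AGF)
 *
 * i.e. AGI before AGF, never the reverse.
 */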

@ -37,9 +37,6 @@ typedef struct xfs_inode {
struct xfs_ifork *i_cowfp; /* copy on write extents */ struct xfs_ifork *i_cowfp; /* copy on write extents */
struct xfs_ifork i_df; /* data fork */ struct xfs_ifork i_df; /* data fork */
/* operations vectors */
const struct xfs_dir_ops *d_ops; /* directory ops vector */
/* Transaction and locking information. */ /* Transaction and locking information. */
struct xfs_inode_log_item *i_itemp; /* logging information */ struct xfs_inode_log_item *i_itemp; /* logging information */
mrlock_t i_lock; /* inode lock */ mrlock_t i_lock; /* inode lock */
@ -177,30 +174,11 @@ xfs_iflags_test_and_set(xfs_inode_t *ip, unsigned short flags)
return ret; return ret;
} }
/*
* Project quota id helpers (previously projid was 16bit only
* and using two 16bit values to hold new 32bit projid was chosen
* to retain compatibility with "old" filesystems).
*/
static inline prid_t
xfs_get_projid(struct xfs_inode *ip)
{
return (prid_t)ip->i_d.di_projid_hi << 16 | ip->i_d.di_projid_lo;
}
static inline void
xfs_set_projid(struct xfs_inode *ip,
prid_t projid)
{
ip->i_d.di_projid_hi = (uint16_t) (projid >> 16);
ip->i_d.di_projid_lo = (uint16_t) (projid & 0xffff);
}
static inline prid_t static inline prid_t
xfs_get_initial_prid(struct xfs_inode *dp) xfs_get_initial_prid(struct xfs_inode *dp)
{ {
if (dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) if (dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT)
return xfs_get_projid(dp); return dp->i_d.di_projid;
return XFS_PROJID_DEFAULT; return XFS_PROJID_DEFAULT;
} }
@ -219,6 +197,13 @@ static inline bool xfs_inode_has_cow_data(struct xfs_inode *ip)
return ip->i_cowfp && ip->i_cowfp->if_bytes; return ip->i_cowfp && ip->i_cowfp->if_bytes;
} }
/*
* Return the buftarg used for data allocations on a given inode.
*/
#define xfs_inode_buftarg(ip) \
(XFS_IS_REALTIME_INODE(ip) ? \
(ip)->i_mount->m_rtdev_targp : (ip)->i_mount->m_ddev_targp)
/* /*
* In-core inode flags. * In-core inode flags.
*/ */


@ -17,6 +17,7 @@
#include "xfs_trans_priv.h" #include "xfs_trans_priv.h"
#include "xfs_buf_item.h" #include "xfs_buf_item.h"
#include "xfs_log.h" #include "xfs_log.h"
#include "xfs_error.h"
#include <linux/iversion.h> #include <linux/iversion.h>
@ -309,8 +310,8 @@ xfs_inode_to_log_dinode(
to->di_format = from->di_format; to->di_format = from->di_format;
to->di_uid = from->di_uid; to->di_uid = from->di_uid;
to->di_gid = from->di_gid; to->di_gid = from->di_gid;
to->di_projid_lo = from->di_projid_lo; to->di_projid_lo = from->di_projid & 0xffff;
to->di_projid_hi = from->di_projid_hi; to->di_projid_hi = from->di_projid >> 16;
memset(to->di_pad, 0, sizeof(to->di_pad)); memset(to->di_pad, 0, sizeof(to->di_pad));
memset(to->di_pad3, 0, sizeof(to->di_pad3)); memset(to->di_pad3, 0, sizeof(to->di_pad3));
@ -340,8 +341,8 @@ xfs_inode_to_log_dinode(
if (from->di_version == 3) { if (from->di_version == 3) {
to->di_changecount = inode_peek_iversion(inode); to->di_changecount = inode_peek_iversion(inode);
to->di_crtime.t_sec = from->di_crtime.t_sec; to->di_crtime.t_sec = from->di_crtime.tv_sec;
to->di_crtime.t_nsec = from->di_crtime.t_nsec; to->di_crtime.t_nsec = from->di_crtime.tv_nsec;
to->di_flags2 = from->di_flags2; to->di_flags2 = from->di_flags2;
to->di_cowextsize = from->di_cowextsize; to->di_cowextsize = from->di_cowextsize;
to->di_ino = ip->i_ino; to->di_ino = ip->i_ino;
@ -666,7 +667,7 @@ xfs_inode_item_destroy(
xfs_inode_t *ip) xfs_inode_t *ip)
{ {
kmem_free(ip->i_itemp->ili_item.li_lv_shadow); kmem_free(ip->i_itemp->ili_item.li_lv_shadow);
kmem_zone_free(xfs_ili_zone, ip->i_itemp); kmem_cache_free(xfs_ili_zone, ip->i_itemp);
} }
@ -828,8 +829,10 @@ xfs_inode_item_format_convert(
{ {
struct xfs_inode_log_format_32 *in_f32 = buf->i_addr; struct xfs_inode_log_format_32 *in_f32 = buf->i_addr;
if (buf->i_len != sizeof(*in_f32)) if (buf->i_len != sizeof(*in_f32)) {
XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
return -EFSCORRUPTED; return -EFSCORRUPTED;
}
in_f->ilf_type = in_f32->ilf_type; in_f->ilf_type = in_f32->ilf_type;
in_f->ilf_size = in_f32->ilf_size; in_f->ilf_size = in_f32->ilf_size;

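The xfs_inode_to_log_dinode() hunk above is where the cleaned-up incore fields meet the log format: the single 32-bit di_projid is split back into the legacy hi/lo 16-bit halves, and the timespec64 crtime is copied field by field into t_sec/t_nsec. A standalone check of the projid split, and of the join that the removed xfs_get_projid() helper used to do; the struct below is a toy holder for the two halves, not the real log dinode:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct toy_log_dinode {		/* just the two legacy projid halves */
	uint16_t di_projid_lo;
	uint16_t di_projid_hi;
};

int main(void)
{
	uint32_t di_projid = 0x00a5003c;	/* incore: one 32-bit value */
	struct toy_log_dinode to;

	/* Split exactly as xfs_inode_to_log_dinode() does above. */
	to.di_projid_lo = di_projid & 0xffff;
	to.di_projid_hi = di_projid >> 16;

	/* Join exactly as the removed xfs_get_projid() helper did. */
	uint32_t joined = (uint32_t)to.di_projid_hi << 16 | to.di_projid_lo;

	assert(joined == di_projid);
	printf("projid %#x -> hi %#x lo %#x -> %#x\n",
	       di_projid, to.di_projid_hi, to.di_projid_lo, joined);
	return 0;
}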

@ -33,6 +33,8 @@
#include "xfs_sb.h" #include "xfs_sb.h"
#include "xfs_ag.h" #include "xfs_ag.h"
#include "xfs_health.h" #include "xfs_health.h"
#include "xfs_reflink.h"
#include "xfs_ioctl.h"
#include <linux/mount.h> #include <linux/mount.h>
#include <linux/namei.h> #include <linux/namei.h>
@ -290,82 +292,6 @@ xfs_readlink_by_handle(
return error; return error;
} }
int
xfs_set_dmattrs(
xfs_inode_t *ip,
uint evmask,
uint16_t state)
{
xfs_mount_t *mp = ip->i_mount;
xfs_trans_t *tp;
int error;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (XFS_FORCED_SHUTDOWN(mp))
return -EIO;
error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
if (error)
return error;
xfs_ilock(ip, XFS_ILOCK_EXCL);
xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
ip->i_d.di_dmevmask = evmask;
ip->i_d.di_dmstate = state;
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
error = xfs_trans_commit(tp);
return error;
}
STATIC int
xfs_fssetdm_by_handle(
struct file *parfilp,
void __user *arg)
{
int error;
struct fsdmidata fsd;
xfs_fsop_setdm_handlereq_t dmhreq;
struct dentry *dentry;
if (!capable(CAP_MKNOD))
return -EPERM;
if (copy_from_user(&dmhreq, arg, sizeof(xfs_fsop_setdm_handlereq_t)))
return -EFAULT;
error = mnt_want_write_file(parfilp);
if (error)
return error;
dentry = xfs_handlereq_to_dentry(parfilp, &dmhreq.hreq);
if (IS_ERR(dentry)) {
mnt_drop_write_file(parfilp);
return PTR_ERR(dentry);
}
if (IS_IMMUTABLE(d_inode(dentry)) || IS_APPEND(d_inode(dentry))) {
error = -EPERM;
goto out;
}
if (copy_from_user(&fsd, dmhreq.data, sizeof(fsd))) {
error = -EFAULT;
goto out;
}
error = xfs_set_dmattrs(XFS_I(d_inode(dentry)), fsd.fsd_dmevmask,
fsd.fsd_dmstate);
out:
mnt_drop_write_file(parfilp);
dput(dentry);
return error;
}
STATIC int STATIC int
xfs_attrlist_by_handle( xfs_attrlist_by_handle(
struct file *parfilp, struct file *parfilp,
@ -588,13 +514,12 @@ xfs_attrmulti_by_handle(
int int
xfs_ioc_space( xfs_ioc_space(
struct file *filp, struct file *filp,
unsigned int cmd,
xfs_flock64_t *bf) xfs_flock64_t *bf)
{ {
struct inode *inode = file_inode(filp); struct inode *inode = file_inode(filp);
struct xfs_inode *ip = XFS_I(inode); struct xfs_inode *ip = XFS_I(inode);
struct iattr iattr; struct iattr iattr;
enum xfs_prealloc_flags flags = 0; enum xfs_prealloc_flags flags = XFS_PREALLOC_CLEAR;
uint iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL; uint iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
int error; int error;
@ -607,6 +532,9 @@ xfs_ioc_space(
if (!S_ISREG(inode->i_mode)) if (!S_ISREG(inode->i_mode))
return -EINVAL; return -EINVAL;
if (xfs_is_always_cow_inode(ip))
return -EOPNOTSUPP;
if (filp->f_flags & O_DSYNC) if (filp->f_flags & O_DSYNC)
flags |= XFS_PREALLOC_SYNC; flags |= XFS_PREALLOC_SYNC;
if (filp->f_mode & FMODE_NOCMTIME) if (filp->f_mode & FMODE_NOCMTIME)
@ -620,6 +548,7 @@ xfs_ioc_space(
error = xfs_break_layouts(inode, &iolock, BREAK_UNMAP); error = xfs_break_layouts(inode, &iolock, BREAK_UNMAP);
if (error) if (error)
goto out_unlock; goto out_unlock;
inode_dio_wait(inode);
switch (bf->l_whence) { switch (bf->l_whence) {
case 0: /*SEEK_SET*/ case 0: /*SEEK_SET*/
@ -635,73 +564,21 @@ xfs_ioc_space(
goto out_unlock; goto out_unlock;
} }
/* if (bf->l_start < 0 || bf->l_start > inode->i_sb->s_maxbytes) {
* length of <= 0 for resv/unresv/zero is invalid. length for
* alloc/free is ignored completely and we have no idea what userspace
* might have set it to, so set it to zero to allow range
* checks to pass.
*/
switch (cmd) {
case XFS_IOC_ZERO_RANGE:
case XFS_IOC_RESVSP:
case XFS_IOC_RESVSP64:
case XFS_IOC_UNRESVSP:
case XFS_IOC_UNRESVSP64:
if (bf->l_len <= 0) {
error = -EINVAL;
goto out_unlock;
}
break;
default:
bf->l_len = 0;
break;
}
if (bf->l_start < 0 ||
bf->l_start > inode->i_sb->s_maxbytes ||
bf->l_start + bf->l_len < 0 ||
bf->l_start + bf->l_len >= inode->i_sb->s_maxbytes) {
error = -EINVAL; error = -EINVAL;
goto out_unlock; goto out_unlock;
} }
switch (cmd) { if (bf->l_start > XFS_ISIZE(ip)) {
case XFS_IOC_ZERO_RANGE: error = xfs_alloc_file_space(ip, XFS_ISIZE(ip),
flags |= XFS_PREALLOC_SET; bf->l_start - XFS_ISIZE(ip), 0);
error = xfs_zero_file_space(ip, bf->l_start, bf->l_len); if (error)
break; goto out_unlock;
case XFS_IOC_RESVSP:
case XFS_IOC_RESVSP64:
flags |= XFS_PREALLOC_SET;
error = xfs_alloc_file_space(ip, bf->l_start, bf->l_len,
XFS_BMAPI_PREALLOC);
break;
case XFS_IOC_UNRESVSP:
case XFS_IOC_UNRESVSP64:
error = xfs_free_file_space(ip, bf->l_start, bf->l_len);
break;
case XFS_IOC_ALLOCSP:
case XFS_IOC_ALLOCSP64:
case XFS_IOC_FREESP:
case XFS_IOC_FREESP64:
flags |= XFS_PREALLOC_CLEAR;
if (bf->l_start > XFS_ISIZE(ip)) {
error = xfs_alloc_file_space(ip, XFS_ISIZE(ip),
bf->l_start - XFS_ISIZE(ip), 0);
if (error)
goto out_unlock;
}
iattr.ia_valid = ATTR_SIZE;
iattr.ia_size = bf->l_start;
error = xfs_vn_setattr_size(file_dentry(filp), &iattr);
break;
default:
ASSERT(0);
error = -EINVAL;
} }
iattr.ia_valid = ATTR_SIZE;
iattr.ia_size = bf->l_start;
error = xfs_vn_setattr_size(file_dentry(filp), &iattr);
if (error) if (error)
goto out_unlock; goto out_unlock;
@@ -1116,7 +993,7 @@ xfs_fill_fsxattr(
 	fa->fsx_extsize = ip->i_d.di_extsize << ip->i_mount->m_sb.sb_blocklog;
 	fa->fsx_cowextsize = ip->i_d.di_cowextsize <<
 			ip->i_mount->m_sb.sb_blocklog;
-	fa->fsx_projid = xfs_get_projid(ip);
+	fa->fsx_projid = ip->i_d.di_projid;
 
 	if (attr) {
 		if (ip->i_afp) {
@@ -1311,10 +1188,9 @@ xfs_ioctl_setattr_dax_invalidate(
 	 * have to check the device for dax support or flush pagecache.
 	 */
 	if (fa->fsx_xflags & FS_XFLAG_DAX) {
-		if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode)))
-			return -EINVAL;
-		if (!bdev_dax_supported(xfs_find_bdev_for_inode(VFS_I(ip)),
-				sb->s_blocksize))
+		struct xfs_buftarg	*target = xfs_inode_buftarg(ip);
+
+		if (!bdev_dax_supported(target->bt_bdev, sb->s_blocksize))
 			return -EINVAL;
 	}
 
@@ -1569,7 +1445,7 @@ xfs_ioctl_setattr(
 	}
 
 	if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_PQUOTA_ON(mp) &&
-	    xfs_get_projid(ip) != fa->fsx_projid) {
+	    ip->i_d.di_projid != fa->fsx_projid) {
 		code = xfs_qm_vop_chown_reserve(tp, ip, udqp, NULL, pdqp,
 				capable(CAP_FOWNER) ? XFS_QMOPT_FORCE_RES : 0);
 		if (code)	/* out of quota */
@@ -1606,13 +1482,13 @@ xfs_ioctl_setattr(
 		VFS_I(ip)->i_mode &= ~(S_ISUID|S_ISGID);
 
 	/* Change the ownerships and register project quota modifications */
-	if (xfs_get_projid(ip) != fa->fsx_projid) {
+	if (ip->i_d.di_projid != fa->fsx_projid) {
 		if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_PQUOTA_ON(mp)) {
 			olddquot = xfs_qm_vop_chown(tp, ip,
 						&ip->i_pdquot, pdqp);
 		}
 		ASSERT(ip->i_d.di_version > 1);
-		xfs_set_projid(ip, fa->fsx_projid);
+		ip->i_d.di_projid = fa->fsx_projid;
 	}
 
 	/*
@@ -2122,24 +1998,17 @@ xfs_file_ioctl(
 		return xfs_ioc_setlabel(filp, mp, arg);
 	case XFS_IOC_ALLOCSP:
 	case XFS_IOC_FREESP:
-	case XFS_IOC_RESVSP:
-	case XFS_IOC_UNRESVSP:
 	case XFS_IOC_ALLOCSP64:
-	case XFS_IOC_FREESP64:
-	case XFS_IOC_RESVSP64:
-	case XFS_IOC_UNRESVSP64:
-	case XFS_IOC_ZERO_RANGE: {
+	case XFS_IOC_FREESP64: {
 		xfs_flock64_t		bf;
 
 		if (copy_from_user(&bf, arg, sizeof(bf)))
 			return -EFAULT;
-		return xfs_ioc_space(filp, cmd, &bf);
+		return xfs_ioc_space(filp, &bf);
 	}
 	case XFS_IOC_DIOINFO: {
-		struct dioattr	da;
-		xfs_buftarg_t	*target =
-			XFS_IS_REALTIME_INODE(ip) ?
-			mp->m_rtdev_targp : mp->m_ddev_targp;
+		struct xfs_buftarg	*target = xfs_inode_buftarg(ip);
+		struct dioattr		da;
 
 		da.d_mem =  da.d_miniosz = target->bt_logical_sectorsize;
 		da.d_maxiosz = INT_MAX & ~(da.d_miniosz - 1);
@@ -2183,22 +2052,6 @@ xfs_file_ioctl(
 	case XFS_IOC_SETXFLAGS:
 		return xfs_ioc_setxflags(ip, filp, arg);
 
-	case XFS_IOC_FSSETDM: {
-		struct fsdmidata	dmi;
-
-		if (copy_from_user(&dmi, arg, sizeof(dmi)))
-			return -EFAULT;
-
-		error = mnt_want_write_file(filp);
-		if (error)
-			return error;
-
-		error = xfs_set_dmattrs(ip, dmi.fsd_dmevmask,
-				dmi.fsd_dmstate);
-		mnt_drop_write_file(filp);
-		return error;
-	}
-
 	case XFS_IOC_GETBMAP:
 	case XFS_IOC_GETBMAPA:
 	case XFS_IOC_GETBMAPX:
@@ -2226,8 +2079,6 @@ xfs_file_ioctl(
 			return -EFAULT;
 		return xfs_open_by_handle(filp, &hreq);
 	}
-	case XFS_IOC_FSSETDM_BY_HANDLE:
-		return xfs_fssetdm_by_handle(filp, arg);
 	case XFS_IOC_READLINK_BY_HANDLE: {
 		xfs_fsop_handlereq_t	hreq;
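The space-reservation ioctl cases removed from this switch now reach the same code through the generic VFS fallocate path instead of XFS-private handlers. A rough userspace sketch of the equivalent fallocate() calls (hypothetical example program, not part of this patch):

/* Userspace sketch: the fallocate() modes that correspond to the old
 * XFS_IOC_RESVSP / XFS_IOC_UNRESVSP / XFS_IOC_ZERO_RANGE ioctls.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>

static int resvsp(int fd, off_t start, off_t len)
{
	/* preallocate without changing i_size, like XFS_IOC_RESVSP */
	return fallocate(fd, FALLOC_FL_KEEP_SIZE, start, len);
}

static int unresvsp(int fd, off_t start, off_t len)
{
	/* punch a hole, like XFS_IOC_UNRESVSP */
	return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 start, len);
}

static int zero_range(int fd, off_t start, off_t len)
{
	/* zero the byte range, like XFS_IOC_ZERO_RANGE */
	return fallocate(fd, FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE,
			 start, len);
}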


@@ -9,7 +9,6 @@
 extern int
 xfs_ioc_space(
 	struct file		*filp,
-	unsigned int		cmd,
 	xfs_flock64_t		*bf);
 
 int
@@ -71,12 +70,6 @@ xfs_file_compat_ioctl(
 	unsigned int		cmd,
 	unsigned long		arg);
 
-extern int
-xfs_set_dmattrs(
-	struct xfs_inode	*ip,
-	uint			evmask,
-	uint16_t		state);
-
 struct xfs_ibulk;
 struct xfs_bstat;
 struct xfs_inogrp;


@@ -500,44 +500,6 @@ xfs_compat_attrmulti_by_handle(
 	return error;
 }
STATIC int
xfs_compat_fssetdm_by_handle(
struct file *parfilp,
void __user *arg)
{
int error;
struct fsdmidata fsd;
compat_xfs_fsop_setdm_handlereq_t dmhreq;
struct dentry *dentry;
if (!capable(CAP_MKNOD))
return -EPERM;
if (copy_from_user(&dmhreq, arg,
sizeof(compat_xfs_fsop_setdm_handlereq_t)))
return -EFAULT;
dentry = xfs_compat_handlereq_to_dentry(parfilp, &dmhreq.hreq);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
if (IS_IMMUTABLE(d_inode(dentry)) || IS_APPEND(d_inode(dentry))) {
error = -EPERM;
goto out;
}
if (copy_from_user(&fsd, compat_ptr(dmhreq.data), sizeof(fsd))) {
error = -EFAULT;
goto out;
}
error = xfs_set_dmattrs(XFS_I(d_inode(dentry)), fsd.fsd_dmevmask,
fsd.fsd_dmstate);
out:
dput(dentry);
return error;
}
 long
 xfs_file_compat_ioctl(
 	struct file		*filp,
@@ -557,18 +519,13 @@ xfs_file_compat_ioctl(
 	case XFS_IOC_ALLOCSP_32:
 	case XFS_IOC_FREESP_32:
 	case XFS_IOC_ALLOCSP64_32:
-	case XFS_IOC_FREESP64_32:
-	case XFS_IOC_RESVSP_32:
-	case XFS_IOC_UNRESVSP_32:
-	case XFS_IOC_RESVSP64_32:
-	case XFS_IOC_UNRESVSP64_32:
-	case XFS_IOC_ZERO_RANGE_32: {
+	case XFS_IOC_FREESP64_32: {
 		struct xfs_flock64	bf;
 
 		if (xfs_compat_flock64_copyin(&bf, arg))
 			return -EFAULT;
 		cmd = _NATIVE_IOC(cmd, struct xfs_flock64);
-		return xfs_ioc_space(filp, cmd, &bf);
+		return xfs_ioc_space(filp, &bf);
 	}
 	case XFS_IOC_FSGEOMETRY_V1_32:
 		return xfs_compat_ioc_fsgeometry_v1(mp, arg);
@@ -651,8 +608,6 @@ xfs_file_compat_ioctl(
 		return xfs_compat_attrlist_by_handle(filp, arg);
 	case XFS_IOC_ATTRMULTI_BY_HANDLE_32:
 		return xfs_compat_attrmulti_by_handle(filp, arg);
-	case XFS_IOC_FSSETDM_BY_HANDLE_32:
-		return xfs_compat_fssetdm_by_handle(filp, arg);
 	default:
 		/* try the native version */
 		return xfs_file_ioctl(filp, cmd, (unsigned long)arg);


@ -99,7 +99,7 @@ typedef struct compat_xfs_fsop_handlereq {
_IOWR('X', 108, struct compat_xfs_fsop_handlereq) _IOWR('X', 108, struct compat_xfs_fsop_handlereq)
/* The bstat field in the swapext struct needs translation */ /* The bstat field in the swapext struct needs translation */
typedef struct compat_xfs_swapext { struct compat_xfs_swapext {
int64_t sx_version; /* version */ int64_t sx_version; /* version */
int64_t sx_fdtarget; /* fd of target file */ int64_t sx_fdtarget; /* fd of target file */
int64_t sx_fdtmp; /* fd of tmp file */ int64_t sx_fdtmp; /* fd of tmp file */
@ -107,7 +107,7 @@ typedef struct compat_xfs_swapext {
xfs_off_t sx_length; /* leng from offset */ xfs_off_t sx_length; /* leng from offset */
char sx_pad[16]; /* pad space, unused */ char sx_pad[16]; /* pad space, unused */
struct compat_xfs_bstat sx_stat; /* stat of target b4 copy */ struct compat_xfs_bstat sx_stat; /* stat of target b4 copy */
} __compat_packed compat_xfs_swapext_t; } __compat_packed;
#define XFS_IOC_SWAPEXT_32 _IOWR('X', 109, struct compat_xfs_swapext) #define XFS_IOC_SWAPEXT_32 _IOWR('X', 109, struct compat_xfs_swapext)
@ -143,15 +143,6 @@ typedef struct compat_xfs_fsop_attrmulti_handlereq {
#define XFS_IOC_ATTRMULTI_BY_HANDLE_32 \ #define XFS_IOC_ATTRMULTI_BY_HANDLE_32 \
_IOW('X', 123, struct compat_xfs_fsop_attrmulti_handlereq) _IOW('X', 123, struct compat_xfs_fsop_attrmulti_handlereq)
typedef struct compat_xfs_fsop_setdm_handlereq {
struct compat_xfs_fsop_handlereq hreq; /* handle information */
/* ptr to struct fsdmidata */
compat_uptr_t data; /* DMAPI data */
} compat_xfs_fsop_setdm_handlereq_t;
#define XFS_IOC_FSSETDM_BY_HANDLE_32 \
_IOW('X', 121, struct compat_xfs_fsop_setdm_handlereq)
#ifdef BROKEN_X86_ALIGNMENT #ifdef BROKEN_X86_ALIGNMENT
/* on ia32 l_start is on a 32-bit boundary */ /* on ia32 l_start is on a 32-bit boundary */
typedef struct compat_xfs_flock64 { typedef struct compat_xfs_flock64 {

(File diff suppressed because it is too large.)


@@ -11,13 +11,14 @@
 struct xfs_inode;
 struct xfs_bmbt_irec;
 
-int xfs_iomap_write_direct(struct xfs_inode *, xfs_off_t, size_t,
-			struct xfs_bmbt_irec *, int);
+int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb,
+		xfs_fileoff_t count_fsb, struct xfs_bmbt_irec *imap);
 int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
+xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip,
+		xfs_fileoff_t end_fsb);
 
 int xfs_bmbt_to_iomap(struct xfs_inode *, struct iomap *,
 		struct xfs_bmbt_irec *, u16);
-xfs_extlen_t xfs_eof_alignment(struct xfs_inode *ip, xfs_extlen_t extsize);
 
 static inline xfs_filblks_t
 xfs_aligned_fsb_count(
@@ -39,7 +40,9 @@ xfs_aligned_fsb_count(
 	return count_fsb;
 }
 
-extern const struct iomap_ops xfs_iomap_ops;
+extern const struct iomap_ops xfs_buffered_write_iomap_ops;
+extern const struct iomap_ops xfs_direct_write_iomap_ops;
+extern const struct iomap_ops xfs_read_iomap_ops;
 extern const struct iomap_ops xfs_seek_iomap_ops;
 extern const struct iomap_ops xfs_xattr_iomap_ops;
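xfs_iomap_write_direct() now takes file-system-block units rather than a byte offset and length, so byte-based callers convert first. A minimal sketch of such a caller; the wrapper function below is hypothetical, while the conversion macros and the prototype come from the header above:

/* Sketch only: convert byte offsets to FSB units before calling the new
 * xfs_iomap_write_direct() prototype.
 */
static int
example_write_direct_bytes(struct xfs_inode *ip, xfs_off_t offset,
		size_t count, struct xfs_bmbt_irec *imap)
{
	struct xfs_mount	*mp = ip->i_mount;
	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
	xfs_fileoff_t		end_fsb = XFS_B_TO_FSB(mp, offset + count);

	return xfs_iomap_write_direct(ip, offset_fsb, end_fsb - offset_fsb,
			imap);
}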


@@ -20,6 +20,7 @@
 #include "xfs_symlink.h"
 #include "xfs_dir2.h"
 #include "xfs_iomap.h"
+#include "xfs_error.h"
 
 #include <linux/xattr.h>
 #include <linux/posix_acl.h>
@@ -470,20 +471,57 @@ xfs_vn_get_link_inline(
 	struct inode		*inode,
 	struct delayed_call	*done)
 {
+	struct xfs_inode	*ip = XFS_I(inode);
 	char			*link;
 
-	ASSERT(XFS_I(inode)->i_df.if_flags & XFS_IFINLINE);
+	ASSERT(ip->i_df.if_flags & XFS_IFINLINE);
 
 	/*
 	 * The VFS crashes on a NULL pointer, so return -EFSCORRUPTED if
 	 * if_data is junk.
 	 */
-	link = XFS_I(inode)->i_df.if_u1.if_data;
-	if (!link)
+	link = ip->i_df.if_u1.if_data;
+	if (XFS_IS_CORRUPT(ip->i_mount, !link))
 		return ERR_PTR(-EFSCORRUPTED);
 	return link;
 }
static uint32_t
xfs_stat_blksize(
struct xfs_inode *ip)
{
struct xfs_mount *mp = ip->i_mount;
/*
* If the file blocks are being allocated from a realtime volume, then
* always return the realtime extent size.
*/
if (XFS_IS_REALTIME_INODE(ip))
return xfs_get_extsz_hint(ip) << mp->m_sb.sb_blocklog;
/*
* Allow large block sizes to be reported to userspace programs if the
* "largeio" mount option is used.
*
* If compatibility mode is specified, simply return the basic unit of
* caching so that we don't get inefficient read/modify/write I/O from
* user apps. Otherwise....
*
* If the underlying volume is a stripe, then return the stripe width in
* bytes as the recommended I/O size. It is not a stripe and we've set a
* default buffered I/O size, return that, otherwise return the compat
* default.
*/
if (mp->m_flags & XFS_MOUNT_LARGEIO) {
if (mp->m_swidth)
return mp->m_swidth << mp->m_sb.sb_blocklog;
if (mp->m_flags & XFS_MOUNT_ALLOCSIZE)
return 1U << mp->m_allocsize_log;
}
return PAGE_SIZE;
}
STATIC int STATIC int
xfs_vn_getattr( xfs_vn_getattr(
const struct path *path, const struct path *path,
@ -516,8 +554,7 @@ xfs_vn_getattr(
if (ip->i_d.di_version == 3) { if (ip->i_d.di_version == 3) {
if (request_mask & STATX_BTIME) { if (request_mask & STATX_BTIME) {
stat->result_mask |= STATX_BTIME; stat->result_mask |= STATX_BTIME;
stat->btime.tv_sec = ip->i_d.di_crtime.t_sec; stat->btime = ip->i_d.di_crtime;
stat->btime.tv_nsec = ip->i_d.di_crtime.t_nsec;
} }
} }
@ -543,16 +580,7 @@ xfs_vn_getattr(
stat->rdev = inode->i_rdev; stat->rdev = inode->i_rdev;
break; break;
default: default:
if (XFS_IS_REALTIME_INODE(ip)) { stat->blksize = xfs_stat_blksize(ip);
/*
* If the file blocks are being allocated from a
* realtime volume, then return the inode's realtime
* extent size or the realtime volume's extent size.
*/
stat->blksize =
xfs_get_extsz_hint(ip) << mp->m_sb.sb_blocklog;
} else
stat->blksize = xfs_preferred_iosize(mp);
stat->rdev = 0; stat->rdev = 0;
break; break;
} }
@ -664,7 +692,7 @@ xfs_setattr_nonsize(
ASSERT(gdqp == NULL); ASSERT(gdqp == NULL);
error = xfs_qm_vop_dqalloc(ip, xfs_kuid_to_uid(uid), error = xfs_qm_vop_dqalloc(ip, xfs_kuid_to_uid(uid),
xfs_kgid_to_gid(gid), xfs_kgid_to_gid(gid),
xfs_get_projid(ip), ip->i_d.di_projid,
qflags, &udqp, &gdqp, NULL); qflags, &udqp, &gdqp, NULL);
if (error) if (error)
return error; return error;
@ -883,10 +911,10 @@ xfs_setattr_size(
if (newsize > oldsize) { if (newsize > oldsize) {
trace_xfs_zero_eof(ip, oldsize, newsize - oldsize); trace_xfs_zero_eof(ip, oldsize, newsize - oldsize);
error = iomap_zero_range(inode, oldsize, newsize - oldsize, error = iomap_zero_range(inode, oldsize, newsize - oldsize,
&did_zeroing, &xfs_iomap_ops); &did_zeroing, &xfs_buffered_write_iomap_ops);
} else { } else {
error = iomap_truncate_page(inode, newsize, &did_zeroing, error = iomap_truncate_page(inode, newsize, &did_zeroing,
&xfs_iomap_ops); &xfs_buffered_write_iomap_ops);
} }
if (error) if (error)
@ -1114,7 +1142,7 @@ xfs_vn_fiemap(
&xfs_xattr_iomap_ops); &xfs_xattr_iomap_ops);
} else { } else {
error = iomap_fiemap(inode, fieinfo, start, length, error = iomap_fiemap(inode, fieinfo, start, length,
&xfs_iomap_ops); &xfs_read_iomap_ops);
} }
xfs_iunlock(XFS_I(inode), XFS_IOLOCK_SHARED); xfs_iunlock(XFS_I(inode), XFS_IOLOCK_SHARED);
@ -1227,7 +1255,7 @@ xfs_inode_supports_dax(
return false; return false;
/* Device has to support DAX too. */ /* Device has to support DAX too. */
return xfs_find_daxdev_for_inode(VFS_I(ip)) != NULL; return xfs_inode_buftarg(ip)->bt_daxdev != NULL;
} }
STATIC void STATIC void
@ -1290,9 +1318,7 @@ xfs_setup_inode(
lockdep_set_class(&inode->i_rwsem, lockdep_set_class(&inode->i_rwsem,
&inode->i_sb->s_type->i_mutex_dir_key); &inode->i_sb->s_type->i_mutex_dir_key);
lockdep_set_class(&ip->i_lock.mr_lock, &xfs_dir_ilock_class); lockdep_set_class(&ip->i_lock.mr_lock, &xfs_dir_ilock_class);
ip->d_ops = ip->i_mount->m_dir_inode_ops;
} else { } else {
ip->d_ops = ip->i_mount->m_nondir_inode_ops;
lockdep_set_class(&ip->i_lock.mr_lock, &xfs_nondir_ilock_class); lockdep_set_class(&ip->i_lock.mr_lock, &xfs_nondir_ilock_class);
} }


@@ -84,7 +84,7 @@ xfs_bulkstat_one_int(
 	/* xfs_iget returns the following without needing
 	 * further change.
 	 */
-	buf->bs_projectid = xfs_get_projid(ip);
+	buf->bs_projectid = ip->i_d.di_projid;
 	buf->bs_ino = ino;
 	buf->bs_uid = dic->di_uid;
 	buf->bs_gid = dic->di_gid;
@@ -97,8 +97,8 @@ xfs_bulkstat_one_int(
 	buf->bs_mtime_nsec = inode->i_mtime.tv_nsec;
 	buf->bs_ctime = inode->i_ctime.tv_sec;
 	buf->bs_ctime_nsec = inode->i_ctime.tv_nsec;
-	buf->bs_btime = dic->di_crtime.t_sec;
-	buf->bs_btime_nsec = dic->di_crtime.t_nsec;
+	buf->bs_btime = dic->di_crtime.tv_sec;
+	buf->bs_btime_nsec = dic->di_crtime.tv_nsec;
 	buf->bs_gen = inode->i_generation;
 	buf->bs_mode = inode->i_mode;


@@ -298,7 +298,8 @@ xfs_iwalk_ag_start(
 	error = xfs_inobt_get_rec(*curpp, irec, has_more);
 	if (error)
 		return error;
-	XFS_WANT_CORRUPTED_RETURN(mp, *has_more == 1);
+	if (XFS_IS_CORRUPT(mp, *has_more != 1))
+		return -EFSCORRUPTED;
 
 	/*
 	 * If the LE lookup yielded an inobt record before the cursor position,


@@ -223,26 +223,32 @@ int xfs_rw_bdev(struct block_device *bdev, sector_t sector, unsigned int count,
 		char *data, unsigned int op);
 
 #define ASSERT_ALWAYS(expr)	\
-	(likely(expr) ? (void)0 : assfail(#expr, __FILE__, __LINE__))
+	(likely(expr) ? (void)0 : assfail(NULL, #expr, __FILE__, __LINE__))
 
 #ifdef DEBUG
 #define ASSERT(expr)	\
-	(likely(expr) ? (void)0 : assfail(#expr, __FILE__, __LINE__))
+	(likely(expr) ? (void)0 : assfail(NULL, #expr, __FILE__, __LINE__))
 
 #else	/* !DEBUG */
 
 #ifdef XFS_WARN
 
 #define ASSERT(expr)	\
-	(likely(expr) ? (void)0 : asswarn(#expr, __FILE__, __LINE__))
+	(likely(expr) ? (void)0 : asswarn(NULL, #expr, __FILE__, __LINE__))
 
 #else	/* !DEBUG && !XFS_WARN */
 #define ASSERT(expr)		((void)0)
 #endif /* XFS_WARN */
 #endif /* DEBUG */
 
+#define XFS_IS_CORRUPT(mp, expr)	\
+	(unlikely(expr) ? xfs_corruption_error(#expr, XFS_ERRLEVEL_LOW, (mp), \
+					       NULL, 0, __FILE__, __LINE__, \
+					       __this_address), \
+			  true : false)
+
 #define STATIC static noinline
 
 #ifdef CONFIG_XFS_RT
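The new XFS_IS_CORRUPT() macro both logs a corruption report (with the failing expression, file, line and return address) and evaluates to true, so the old two-step "report, then return" pattern collapses into a single if. A minimal sketch of the calling convention it enables; the function below is illustrative only, not from the patch:

/* Hypothetical check showing how XFS_IS_CORRUPT() is meant to be used. */
static int
example_check(struct xfs_mount *mp, int nblocks)
{
	if (XFS_IS_CORRUPT(mp, nblocks < 0))
		return -EFSCORRUPTED;
	return 0;
}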


@ -57,10 +57,6 @@ xlog_state_get_iclog_space(
struct xlog_ticket *ticket, struct xlog_ticket *ticket,
int *continued_write, int *continued_write,
int *logoffsetp); int *logoffsetp);
STATIC int
xlog_state_release_iclog(
struct xlog *log,
struct xlog_in_core *iclog);
STATIC void STATIC void
xlog_state_switch_iclogs( xlog_state_switch_iclogs(
struct xlog *log, struct xlog *log,
@ -83,7 +79,10 @@ STATIC void
xlog_ungrant_log_space( xlog_ungrant_log_space(
struct xlog *log, struct xlog *log,
struct xlog_ticket *ticket); struct xlog_ticket *ticket);
STATIC void
xlog_sync(
struct xlog *log,
struct xlog_in_core *iclog);
#if defined(DEBUG) #if defined(DEBUG)
STATIC void STATIC void
xlog_verify_dest_ptr( xlog_verify_dest_ptr(
@@ -552,16 +551,71 @@ xfs_log_done(
 	return lsn;
 }
 
-int
-xfs_log_release_iclog(
-	struct xfs_mount	*mp,
+static bool
+__xlog_state_release_iclog(
+	struct xlog		*log,
 	struct xlog_in_core	*iclog)
 {
-	if (xlog_state_release_iclog(mp->m_log, iclog)) {
+	lockdep_assert_held(&log->l_icloglock);
+
+	if (iclog->ic_state == XLOG_STATE_WANT_SYNC) {
+		/* update tail before writing to iclog */
+		xfs_lsn_t tail_lsn = xlog_assign_tail_lsn(log->l_mp);
+
+		iclog->ic_state = XLOG_STATE_SYNCING;
+		iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn);
+		xlog_verify_tail_lsn(log, iclog, tail_lsn);
+		/* cycle incremented when incrementing curr_block */
+		return true;
+	}
+
+	ASSERT(iclog->ic_state == XLOG_STATE_ACTIVE);
+	return false;
+}
+
+/*
+ * Flush iclog to disk if this is the last reference to the given iclog and the
+ * it is in the WANT_SYNC state.
+ */
+static int
+xlog_state_release_iclog(
+	struct xlog		*log,
+	struct xlog_in_core	*iclog)
+{
+	lockdep_assert_held(&log->l_icloglock);
+
+	if (iclog->ic_state == XLOG_STATE_IOERROR)
+		return -EIO;
+
+	if (atomic_dec_and_test(&iclog->ic_refcnt) &&
+	    __xlog_state_release_iclog(log, iclog)) {
+		spin_unlock(&log->l_icloglock);
+		xlog_sync(log, iclog);
+		spin_lock(&log->l_icloglock);
+	}
+
+	return 0;
+}
+
+int
+xfs_log_release_iclog(
+	struct xfs_mount	*mp,
+	struct xlog_in_core	*iclog)
+{
+	struct xlog		*log = mp->m_log;
+	bool			sync;
+
+	if (iclog->ic_state == XLOG_STATE_IOERROR) {
 		xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR);
 		return -EIO;
 	}
 
+	if (atomic_dec_and_lock(&iclog->ic_refcnt, &log->l_icloglock)) {
+		sync = __xlog_state_release_iclog(log, iclog);
+		spin_unlock(&log->l_icloglock);
+		if (sync)
+			xlog_sync(log, iclog);
+	}
+
 	return 0;
 }
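The rewritten xfs_log_release_iclog() leans on atomic_dec_and_lock(), which only acquires l_icloglock when the reference count actually drops to zero. A tiny, self-contained illustration of that primitive; the helper below is hypothetical and only shows the idiom:

/* atomic_dec_and_lock() returns true, with the lock held, iff the counter
 * reached zero — so teardown work and the unlock only happen on that path.
 */
static void example_put(atomic_t *refcnt, spinlock_t *lock)
{
	if (atomic_dec_and_lock(refcnt, lock)) {
		/* last reference: tear down while holding the lock */
		spin_unlock(lock);
	}
}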
@ -866,10 +920,7 @@ out_err:
iclog = log->l_iclog; iclog = log->l_iclog;
atomic_inc(&iclog->ic_refcnt); atomic_inc(&iclog->ic_refcnt);
xlog_state_want_sync(log, iclog); xlog_state_want_sync(log, iclog);
spin_unlock(&log->l_icloglock);
error = xlog_state_release_iclog(log, iclog); error = xlog_state_release_iclog(log, iclog);
spin_lock(&log->l_icloglock);
switch (iclog->ic_state) { switch (iclog->ic_state) {
default: default:
if (!XLOG_FORCED_SHUTDOWN(log)) { if (!XLOG_FORCED_SHUTDOWN(log)) {
@ -924,8 +975,8 @@ xfs_log_unmount_write(xfs_mount_t *mp)
#ifdef DEBUG #ifdef DEBUG
first_iclog = iclog = log->l_iclog; first_iclog = iclog = log->l_iclog;
do { do {
if (!(iclog->ic_state & XLOG_STATE_IOERROR)) { if (iclog->ic_state != XLOG_STATE_IOERROR) {
ASSERT(iclog->ic_state & XLOG_STATE_ACTIVE); ASSERT(iclog->ic_state == XLOG_STATE_ACTIVE);
ASSERT(iclog->ic_offset == 0); ASSERT(iclog->ic_offset == 0);
} }
iclog = iclog->ic_next; iclog = iclog->ic_next;
@@ -950,21 +1001,17 @@ xfs_log_unmount_write(xfs_mount_t *mp)
 		spin_lock(&log->l_icloglock);
 		iclog = log->l_iclog;
 		atomic_inc(&iclog->ic_refcnt);
 		xlog_state_want_sync(log, iclog);
-		spin_unlock(&log->l_icloglock);
 		error =  xlog_state_release_iclog(log, iclog);
-
-		spin_lock(&log->l_icloglock);
-
-		if ( ! (   iclog->ic_state == XLOG_STATE_ACTIVE
-			|| iclog->ic_state == XLOG_STATE_DIRTY
-			|| iclog->ic_state == XLOG_STATE_IOERROR) ) {
-
-				xlog_wait(&iclog->ic_force_wait,
-							&log->l_icloglock);
-		} else {
+		switch (iclog->ic_state) {
+		case XLOG_STATE_ACTIVE:
+		case XLOG_STATE_DIRTY:
+		case XLOG_STATE_IOERROR:
 			spin_unlock(&log->l_icloglock);
-		}
+			break;
+		default:
+			xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
+			break;
+		}
 	}
@ -1254,7 +1301,7 @@ xlog_ioend_work(
* didn't succeed. * didn't succeed.
*/ */
aborted = true; aborted = true;
} else if (iclog->ic_state & XLOG_STATE_IOERROR) { } else if (iclog->ic_state == XLOG_STATE_IOERROR) {
aborted = true; aborted = true;
} }
@ -1479,7 +1526,7 @@ xlog_alloc_log(
log->l_ioend_workqueue = alloc_workqueue("xfs-log/%s", log->l_ioend_workqueue = alloc_workqueue("xfs-log/%s",
WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_HIGHPRI, 0, WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_HIGHPRI, 0,
mp->m_fsname); mp->m_super->s_id);
if (!log->l_ioend_workqueue) if (!log->l_ioend_workqueue)
goto out_free_iclog; goto out_free_iclog;
@ -1727,7 +1774,7 @@ xlog_write_iclog(
* across the log IO to archieve that. * across the log IO to archieve that.
*/ */
down(&iclog->ic_sema); down(&iclog->ic_sema);
if (unlikely(iclog->ic_state & XLOG_STATE_IOERROR)) { if (unlikely(iclog->ic_state == XLOG_STATE_IOERROR)) {
/* /*
* It would seem logical to return EIO here, but we rely on * It would seem logical to return EIO here, but we rely on
* the log state machine to propagate I/O errors instead of * the log state machine to propagate I/O errors instead of
@ -1735,13 +1782,11 @@ xlog_write_iclog(
* the buffer manually, the code needs to be kept in sync * the buffer manually, the code needs to be kept in sync
* with the I/O completion path. * with the I/O completion path.
*/ */
xlog_state_done_syncing(iclog, XFS_LI_ABORTED); xlog_state_done_syncing(iclog, true);
up(&iclog->ic_sema); up(&iclog->ic_sema);
return; return;
} }
iclog->ic_io_size = count;
bio_init(&iclog->ic_bio, iclog->ic_bvec, howmany(count, PAGE_SIZE)); bio_init(&iclog->ic_bio, iclog->ic_bvec, howmany(count, PAGE_SIZE));
bio_set_dev(&iclog->ic_bio, log->l_targ->bt_bdev); bio_set_dev(&iclog->ic_bio, log->l_targ->bt_bdev);
iclog->ic_bio.bi_iter.bi_sector = log->l_logBBstart + bno; iclog->ic_bio.bi_iter.bi_sector = log->l_logBBstart + bno;
@ -1751,9 +1796,9 @@ xlog_write_iclog(
if (need_flush) if (need_flush)
iclog->ic_bio.bi_opf |= REQ_PREFLUSH; iclog->ic_bio.bi_opf |= REQ_PREFLUSH;
xlog_map_iclog_data(&iclog->ic_bio, iclog->ic_data, iclog->ic_io_size); xlog_map_iclog_data(&iclog->ic_bio, iclog->ic_data, count);
if (is_vmalloc_addr(iclog->ic_data)) if (is_vmalloc_addr(iclog->ic_data))
flush_kernel_vmap_range(iclog->ic_data, iclog->ic_io_size); flush_kernel_vmap_range(iclog->ic_data, count);
/* /*
* If this log buffer would straddle the end of the log we will have * If this log buffer would straddle the end of the log we will have
@ -1969,7 +2014,6 @@ xlog_dealloc_log(
/* /*
* Update counters atomically now that memcpy is done. * Update counters atomically now that memcpy is done.
*/ */
/* ARGSUSED */
static inline void static inline void
xlog_state_finish_copy( xlog_state_finish_copy(
struct xlog *log, struct xlog *log,
@ -1977,16 +2021,11 @@ xlog_state_finish_copy(
int record_cnt, int record_cnt,
int copy_bytes) int copy_bytes)
{ {
spin_lock(&log->l_icloglock); lockdep_assert_held(&log->l_icloglock);
be32_add_cpu(&iclog->ic_header.h_num_logops, record_cnt); be32_add_cpu(&iclog->ic_header.h_num_logops, record_cnt);
iclog->ic_offset += copy_bytes; iclog->ic_offset += copy_bytes;
}
spin_unlock(&log->l_icloglock);
} /* xlog_state_finish_copy */
/* /*
* print out info relating to regions written which consume * print out info relating to regions written which consume
@@ -2263,15 +2302,18 @@ xlog_write_copy_finish(
 	int			log_offset,
 	struct xlog_in_core	**commit_iclog)
 {
+	int			error;
+
 	if (*partial_copy) {
 		/*
 		 * This iclog has already been marked WANT_SYNC by
 		 * xlog_state_get_iclog_space.
 		 */
+		spin_lock(&log->l_icloglock);
 		xlog_state_finish_copy(log, iclog, *record_cnt, *data_cnt);
 		*record_cnt = 0;
 		*data_cnt = 0;
-		return xlog_state_release_iclog(log, iclog);
+		goto release_iclog;
 	}
 
 	*partial_copy = 0;
@@ -2279,21 +2321,25 @@ xlog_write_copy_finish(
 	if (iclog->ic_size - log_offset <= sizeof(xlog_op_header_t)) {
 		/* no more space in this iclog - push it. */
+		spin_lock(&log->l_icloglock);
 		xlog_state_finish_copy(log, iclog, *record_cnt, *data_cnt);
 		*record_cnt = 0;
 		*data_cnt = 0;
 
-		spin_lock(&log->l_icloglock);
 		xlog_state_want_sync(log, iclog);
-		spin_unlock(&log->l_icloglock);
-
 		if (!commit_iclog)
-			return xlog_state_release_iclog(log, iclog);
+			goto release_iclog;
+		spin_unlock(&log->l_icloglock);
 		ASSERT(flags & XLOG_COMMIT_TRANS);
 		*commit_iclog = iclog;
 	}
 
 	return 0;
+
+release_iclog:
+	error = xlog_state_release_iclog(log, iclog);
+	spin_unlock(&log->l_icloglock);
+	return error;
 }
/* /*
@ -2355,7 +2401,7 @@ xlog_write(
int contwr = 0; int contwr = 0;
int record_cnt = 0; int record_cnt = 0;
int data_cnt = 0; int data_cnt = 0;
int error; int error = 0;
*start_lsn = 0; *start_lsn = 0;
@@ -2506,13 +2552,17 @@ next_lv:
 
 	ASSERT(len == 0);
 
+	spin_lock(&log->l_icloglock);
 	xlog_state_finish_copy(log, iclog, record_cnt, data_cnt);
-	if (!commit_iclog)
-		return xlog_state_release_iclog(log, iclog);
+	if (commit_iclog) {
+		ASSERT(flags & XLOG_COMMIT_TRANS);
+		*commit_iclog = iclog;
+	} else {
+		error = xlog_state_release_iclog(log, iclog);
+	}
+	spin_unlock(&log->l_icloglock);
 
-	ASSERT(flags & XLOG_COMMIT_TRANS);
-	*commit_iclog = iclog;
-	return 0;
+	return error;
 }
@ -2548,7 +2598,7 @@ xlog_state_clean_iclog(
int changed = 0; int changed = 0;
/* Prepare the completed iclog. */ /* Prepare the completed iclog. */
if (!(dirty_iclog->ic_state & XLOG_STATE_IOERROR)) if (dirty_iclog->ic_state != XLOG_STATE_IOERROR)
dirty_iclog->ic_state = XLOG_STATE_DIRTY; dirty_iclog->ic_state = XLOG_STATE_DIRTY;
/* Walk all the iclogs to update the ordered active state. */ /* Walk all the iclogs to update the ordered active state. */
@ -2639,7 +2689,8 @@ xlog_get_lowest_lsn(
xfs_lsn_t lowest_lsn = 0, lsn; xfs_lsn_t lowest_lsn = 0, lsn;
do { do {
if (iclog->ic_state & (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY)) if (iclog->ic_state == XLOG_STATE_ACTIVE ||
iclog->ic_state == XLOG_STATE_DIRTY)
continue; continue;
lsn = be64_to_cpu(iclog->ic_header.h_lsn); lsn = be64_to_cpu(iclog->ic_header.h_lsn);
@@ -2699,61 +2750,48 @@ static bool
 xlog_state_iodone_process_iclog(
 	struct xlog		*log,
 	struct xlog_in_core	*iclog,
-	struct xlog_in_core	*completed_iclog,
 	bool			*ioerror)
 {
 	xfs_lsn_t		lowest_lsn;
 	xfs_lsn_t		header_lsn;
 
-	/* Skip all iclogs in the ACTIVE & DIRTY states */
-	if (iclog->ic_state & (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY))
+	switch (iclog->ic_state) {
+	case XLOG_STATE_ACTIVE:
+	case XLOG_STATE_DIRTY:
+		/*
+		 * Skip all iclogs in the ACTIVE & DIRTY states:
+		 */
 		return false;
-
-	/*
-	 * Between marking a filesystem SHUTDOWN and stopping the log, we do
-	 * flush all iclogs to disk (if there wasn't a log I/O error). So, we do
-	 * want things to go smoothly in case of just a SHUTDOWN w/o a
-	 * LOG_IO_ERROR.
-	 */
-	if (iclog->ic_state & XLOG_STATE_IOERROR) {
+	case XLOG_STATE_IOERROR:
+		/*
+		 * Between marking a filesystem SHUTDOWN and stopping the log,
+		 * we do flush all iclogs to disk (if there wasn't a log I/O
+		 * error). So, we do want things to go smoothly in case of just
+		 * a SHUTDOWN w/o a LOG_IO_ERROR.
+		 */
 		*ioerror = true;
 		return false;
-	}
-
-	/*
-	 * Can only perform callbacks in order. Since this iclog is not in the
-	 * DONE_SYNC/ DO_CALLBACK state, we skip the rest and just try to clean
-	 * up. If we set our iclog to DO_CALLBACK, we will not process it when
-	 * we retry since a previous iclog is in the CALLBACK and the state
-	 * cannot change since we are holding the l_icloglock.
-	 */
-	if (!(iclog->ic_state &
-			(XLOG_STATE_DONE_SYNC | XLOG_STATE_DO_CALLBACK))) {
-		if (completed_iclog &&
-		    (completed_iclog->ic_state == XLOG_STATE_DONE_SYNC)) {
-			completed_iclog->ic_state = XLOG_STATE_DO_CALLBACK;
-		}
+	case XLOG_STATE_DONE_SYNC:
+		/*
+		 * Now that we have an iclog that is in the DONE_SYNC state, do
+		 * one more check here to see if we have chased our tail
+		 * around. If this is not the lowest lsn iclog, then we will
+		 * leave it for another completion to process.
+		 */
+		header_lsn = be64_to_cpu(iclog->ic_header.h_lsn);
+		lowest_lsn = xlog_get_lowest_lsn(log);
+		if (lowest_lsn && XFS_LSN_CMP(lowest_lsn, header_lsn) < 0)
+			return false;
+		xlog_state_set_callback(log, iclog, header_lsn);
+		return false;
+	default:
+		/*
+		 * Can only perform callbacks in order. Since this iclog is not
+		 * in the DONE_SYNC state, we skip the rest and just try to
+		 * clean up.
+		 */
 		return true;
 	}
-
-	/*
-	 * We now have an iclog that is in either the DO_CALLBACK or DONE_SYNC
-	 * states. The other states (WANT_SYNC, SYNCING, or CALLBACK were caught
-	 * by the above if and are going to clean (i.e. we aren't doing their
-	 * callbacks) see the above if.
-	 *
-	 * We will do one more check here to see if we have chased our tail
-	 * around. If this is not the lowest lsn iclog, then we will leave it
-	 * for another completion to process.
-	 */
-	header_lsn = be64_to_cpu(iclog->ic_header.h_lsn);
-	lowest_lsn = xlog_get_lowest_lsn(log);
-	if (lowest_lsn && XFS_LSN_CMP(lowest_lsn, header_lsn) < 0)
-		return false;
-
-	xlog_state_set_callback(log, iclog, header_lsn);
-	return false;
 }
/* /*
@ -2770,6 +2808,8 @@ xlog_state_do_iclog_callbacks(
struct xlog *log, struct xlog *log,
struct xlog_in_core *iclog, struct xlog_in_core *iclog,
bool aborted) bool aborted)
__releases(&log->l_icloglock)
__acquires(&log->l_icloglock)
{ {
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
spin_lock(&iclog->ic_callback_lock); spin_lock(&iclog->ic_callback_lock);
@ -2792,57 +2832,13 @@ xlog_state_do_iclog_callbacks(
spin_unlock(&iclog->ic_callback_lock); spin_unlock(&iclog->ic_callback_lock);
} }
#ifdef DEBUG
/*
* Make one last gasp attempt to see if iclogs are being left in limbo. If the
* above loop finds an iclog earlier than the current iclog and in one of the
* syncing states, the current iclog is put into DO_CALLBACK and the callbacks
* are deferred to the completion of the earlier iclog. Walk the iclogs in order
* and make sure that no iclog is in DO_CALLBACK unless an earlier iclog is in
* one of the syncing states.
*
* Note that SYNCING|IOERROR is a valid state so we cannot just check for
* ic_state == SYNCING.
*/
static void
xlog_state_callback_check_state(
struct xlog *log)
{
struct xlog_in_core *first_iclog = log->l_iclog;
struct xlog_in_core *iclog = first_iclog;
do {
ASSERT(iclog->ic_state != XLOG_STATE_DO_CALLBACK);
/*
* Terminate the loop if iclogs are found in states
* which will cause other threads to clean up iclogs.
*
* SYNCING - i/o completion will go through logs
* DONE_SYNC - interrupt thread should be waiting for
* l_icloglock
* IOERROR - give up hope all ye who enter here
*/
if (iclog->ic_state == XLOG_STATE_WANT_SYNC ||
iclog->ic_state & XLOG_STATE_SYNCING ||
iclog->ic_state == XLOG_STATE_DONE_SYNC ||
iclog->ic_state == XLOG_STATE_IOERROR )
break;
iclog = iclog->ic_next;
} while (first_iclog != iclog);
}
#else
#define xlog_state_callback_check_state(l) ((void)0)
#endif
STATIC void STATIC void
xlog_state_do_callback( xlog_state_do_callback(
struct xlog *log, struct xlog *log,
bool aborted, bool aborted)
struct xlog_in_core *ciclog)
{ {
struct xlog_in_core *iclog; struct xlog_in_core *iclog;
struct xlog_in_core *first_iclog; struct xlog_in_core *first_iclog;
bool did_callbacks = false;
bool cycled_icloglock; bool cycled_icloglock;
bool ioerror; bool ioerror;
int flushcnt = 0; int flushcnt = 0;
@ -2866,11 +2862,11 @@ xlog_state_do_callback(
do { do {
if (xlog_state_iodone_process_iclog(log, iclog, if (xlog_state_iodone_process_iclog(log, iclog,
ciclog, &ioerror)) &ioerror))
break; break;
if (!(iclog->ic_state & if (iclog->ic_state != XLOG_STATE_CALLBACK &&
(XLOG_STATE_CALLBACK | XLOG_STATE_IOERROR))) { iclog->ic_state != XLOG_STATE_IOERROR) {
iclog = iclog->ic_next; iclog = iclog->ic_next;
continue; continue;
} }
@ -2886,8 +2882,6 @@ xlog_state_do_callback(
iclog = iclog->ic_next; iclog = iclog->ic_next;
} while (first_iclog != iclog); } while (first_iclog != iclog);
did_callbacks |= cycled_icloglock;
if (repeats > 5000) { if (repeats > 5000) {
flushcnt += repeats; flushcnt += repeats;
repeats = 0; repeats = 0;
@@ -2897,10 +2891,8 @@
 		}
 	} while (!ioerror && cycled_icloglock);
 
-	if (did_callbacks)
-		xlog_state_callback_check_state(log);
-
-	if (log->l_iclog->ic_state & (XLOG_STATE_ACTIVE|XLOG_STATE_IOERROR))
+	if (log->l_iclog->ic_state == XLOG_STATE_ACTIVE ||
+	    log->l_iclog->ic_state == XLOG_STATE_IOERROR)
 		wake_up_all(&log->l_flush_wait);
 
 	spin_unlock(&log->l_icloglock);
@ -2929,8 +2921,6 @@ xlog_state_done_syncing(
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
ASSERT(iclog->ic_state == XLOG_STATE_SYNCING ||
iclog->ic_state == XLOG_STATE_IOERROR);
ASSERT(atomic_read(&iclog->ic_refcnt) == 0); ASSERT(atomic_read(&iclog->ic_refcnt) == 0);
/* /*
@ -2939,8 +2929,10 @@ xlog_state_done_syncing(
* and none should ever be attempted to be written to disk * and none should ever be attempted to be written to disk
* again. * again.
*/ */
if (iclog->ic_state != XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_SYNCING)
iclog->ic_state = XLOG_STATE_DONE_SYNC; iclog->ic_state = XLOG_STATE_DONE_SYNC;
else
ASSERT(iclog->ic_state == XLOG_STATE_IOERROR);
/* /*
* Someone could be sleeping prior to writing out the next * Someone could be sleeping prior to writing out the next
@ -2949,7 +2941,7 @@ xlog_state_done_syncing(
*/ */
wake_up_all(&iclog->ic_write_wait); wake_up_all(&iclog->ic_write_wait);
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
xlog_state_do_callback(log, aborted, iclog); /* also cleans log */ xlog_state_do_callback(log, aborted); /* also cleans log */
} /* xlog_state_done_syncing */ } /* xlog_state_done_syncing */
@ -2983,7 +2975,6 @@ xlog_state_get_iclog_space(
int log_offset; int log_offset;
xlog_rec_header_t *head; xlog_rec_header_t *head;
xlog_in_core_t *iclog; xlog_in_core_t *iclog;
int error;
restart: restart:
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
@ -3032,24 +3023,22 @@ restart:
* can fit into remaining data section. * can fit into remaining data section.
*/ */
if (iclog->ic_size - iclog->ic_offset < 2*sizeof(xlog_op_header_t)) { if (iclog->ic_size - iclog->ic_offset < 2*sizeof(xlog_op_header_t)) {
int error = 0;
xlog_state_switch_iclogs(log, iclog, iclog->ic_size); xlog_state_switch_iclogs(log, iclog, iclog->ic_size);
/* /*
* If I'm the only one writing to this iclog, sync it to disk. * If we are the only one writing to this iclog, sync it to
* We need to do an atomic compare and decrement here to avoid * disk. We need to do an atomic compare and decrement here to
* racing with concurrent atomic_dec_and_lock() calls in * avoid racing with concurrent atomic_dec_and_lock() calls in
* xlog_state_release_iclog() when there is more than one * xlog_state_release_iclog() when there is more than one
* reference to the iclog. * reference to the iclog.
*/ */
if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) { if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1))
/* we are the only one */
spin_unlock(&log->l_icloglock);
error = xlog_state_release_iclog(log, iclog); error = xlog_state_release_iclog(log, iclog);
if (error) spin_unlock(&log->l_icloglock);
return error; if (error)
} else { return error;
spin_unlock(&log->l_icloglock);
}
goto restart; goto restart;
} }
@ -3160,60 +3149,6 @@ xlog_ungrant_log_space(
xfs_log_space_wake(log->l_mp); xfs_log_space_wake(log->l_mp);
} }
/*
* Flush iclog to disk if this is the last reference to the given iclog and
* the WANT_SYNC bit is set.
*
* When this function is entered, the iclog is not necessarily in the
* WANT_SYNC state. It may be sitting around waiting to get filled.
*
*
*/
STATIC int
xlog_state_release_iclog(
struct xlog *log,
struct xlog_in_core *iclog)
{
int sync = 0; /* do we sync? */
if (iclog->ic_state & XLOG_STATE_IOERROR)
return -EIO;
ASSERT(atomic_read(&iclog->ic_refcnt) > 0);
if (!atomic_dec_and_lock(&iclog->ic_refcnt, &log->l_icloglock))
return 0;
if (iclog->ic_state & XLOG_STATE_IOERROR) {
spin_unlock(&log->l_icloglock);
return -EIO;
}
ASSERT(iclog->ic_state == XLOG_STATE_ACTIVE ||
iclog->ic_state == XLOG_STATE_WANT_SYNC);
if (iclog->ic_state == XLOG_STATE_WANT_SYNC) {
/* update tail before writing to iclog */
xfs_lsn_t tail_lsn = xlog_assign_tail_lsn(log->l_mp);
sync++;
iclog->ic_state = XLOG_STATE_SYNCING;
iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn);
xlog_verify_tail_lsn(log, iclog, tail_lsn);
/* cycle incremented when incrementing curr_block */
}
spin_unlock(&log->l_icloglock);
/*
* We let the log lock go, so it's possible that we hit a log I/O
* error or some other SHUTDOWN condition that marks the iclog
* as XLOG_STATE_IOERROR before the bwrite. However, we know that
* this iclog has consistent data, so we ignore IOERROR
* flags after this point.
*/
if (sync)
xlog_sync(log, iclog);
return 0;
} /* xlog_state_release_iclog */
/* /*
* This routine will mark the current iclog in the ring as WANT_SYNC * This routine will mark the current iclog in the ring as WANT_SYNC
* and move the current iclog pointer to the next iclog in the ring. * and move the current iclog pointer to the next iclog in the ring.
@ -3307,7 +3242,7 @@ xfs_log_force(
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
iclog = log->l_iclog; iclog = log->l_iclog;
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
goto out_error; goto out_error;
if (iclog->ic_state == XLOG_STATE_DIRTY || if (iclog->ic_state == XLOG_STATE_DIRTY ||
@ -3337,12 +3272,9 @@ xfs_log_force(
atomic_inc(&iclog->ic_refcnt); atomic_inc(&iclog->ic_refcnt);
lsn = be64_to_cpu(iclog->ic_header.h_lsn); lsn = be64_to_cpu(iclog->ic_header.h_lsn);
xlog_state_switch_iclogs(log, iclog, 0); xlog_state_switch_iclogs(log, iclog, 0);
spin_unlock(&log->l_icloglock);
if (xlog_state_release_iclog(log, iclog)) if (xlog_state_release_iclog(log, iclog))
return -EIO; goto out_error;
spin_lock(&log->l_icloglock);
if (be64_to_cpu(iclog->ic_header.h_lsn) != lsn || if (be64_to_cpu(iclog->ic_header.h_lsn) != lsn ||
iclog->ic_state == XLOG_STATE_DIRTY) iclog->ic_state == XLOG_STATE_DIRTY)
goto out_unlock; goto out_unlock;
@ -3367,11 +3299,11 @@ xfs_log_force(
if (!(flags & XFS_LOG_SYNC)) if (!(flags & XFS_LOG_SYNC))
goto out_unlock; goto out_unlock;
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
goto out_error; goto out_error;
XFS_STATS_INC(mp, xs_log_force_sleep); XFS_STATS_INC(mp, xs_log_force_sleep);
xlog_wait(&iclog->ic_force_wait, &log->l_icloglock); xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
return -EIO; return -EIO;
return 0; return 0;
@ -3396,7 +3328,7 @@ __xfs_log_force_lsn(
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
iclog = log->l_iclog; iclog = log->l_iclog;
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
goto out_error; goto out_error;
while (be64_to_cpu(iclog->ic_header.h_lsn) != lsn) { while (be64_to_cpu(iclog->ic_header.h_lsn) != lsn) {
@ -3425,10 +3357,8 @@ __xfs_log_force_lsn(
* will go out then. * will go out then.
*/ */
if (!already_slept && if (!already_slept &&
(iclog->ic_prev->ic_state & (iclog->ic_prev->ic_state == XLOG_STATE_WANT_SYNC ||
(XLOG_STATE_WANT_SYNC | XLOG_STATE_SYNCING))) { iclog->ic_prev->ic_state == XLOG_STATE_SYNCING)) {
ASSERT(!(iclog->ic_state & XLOG_STATE_IOERROR));
XFS_STATS_INC(mp, xs_log_force_sleep); XFS_STATS_INC(mp, xs_log_force_sleep);
xlog_wait(&iclog->ic_prev->ic_write_wait, xlog_wait(&iclog->ic_prev->ic_write_wait,
@ -3437,24 +3367,23 @@ __xfs_log_force_lsn(
} }
atomic_inc(&iclog->ic_refcnt); atomic_inc(&iclog->ic_refcnt);
xlog_state_switch_iclogs(log, iclog, 0); xlog_state_switch_iclogs(log, iclog, 0);
spin_unlock(&log->l_icloglock);
if (xlog_state_release_iclog(log, iclog)) if (xlog_state_release_iclog(log, iclog))
return -EIO; goto out_error;
if (log_flushed) if (log_flushed)
*log_flushed = 1; *log_flushed = 1;
spin_lock(&log->l_icloglock);
} }
if (!(flags & XFS_LOG_SYNC) || if (!(flags & XFS_LOG_SYNC) ||
(iclog->ic_state & (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY))) (iclog->ic_state == XLOG_STATE_ACTIVE ||
iclog->ic_state == XLOG_STATE_DIRTY))
goto out_unlock; goto out_unlock;
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
goto out_error; goto out_error;
XFS_STATS_INC(mp, xs_log_force_sleep); XFS_STATS_INC(mp, xs_log_force_sleep);
xlog_wait(&iclog->ic_force_wait, &log->l_icloglock); xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
if (iclog->ic_state & XLOG_STATE_IOERROR) if (iclog->ic_state == XLOG_STATE_IOERROR)
return -EIO; return -EIO;
return 0; return 0;
@ -3517,8 +3446,8 @@ xlog_state_want_sync(
if (iclog->ic_state == XLOG_STATE_ACTIVE) { if (iclog->ic_state == XLOG_STATE_ACTIVE) {
xlog_state_switch_iclogs(log, iclog, 0); xlog_state_switch_iclogs(log, iclog, 0);
} else { } else {
ASSERT(iclog->ic_state & ASSERT(iclog->ic_state == XLOG_STATE_WANT_SYNC ||
(XLOG_STATE_WANT_SYNC|XLOG_STATE_IOERROR)); iclog->ic_state == XLOG_STATE_IOERROR);
} }
} }
@ -3539,7 +3468,7 @@ xfs_log_ticket_put(
{ {
ASSERT(atomic_read(&ticket->t_ref) > 0); ASSERT(atomic_read(&ticket->t_ref) > 0);
if (atomic_dec_and_test(&ticket->t_ref)) if (atomic_dec_and_test(&ticket->t_ref))
kmem_zone_free(xfs_log_ticket_zone, ticket); kmem_cache_free(xfs_log_ticket_zone, ticket);
} }
xlog_ticket_t * xlog_ticket_t *
@ -3895,7 +3824,7 @@ xlog_state_ioerror(
xlog_in_core_t *iclog, *ic; xlog_in_core_t *iclog, *ic;
iclog = log->l_iclog; iclog = log->l_iclog;
if (! (iclog->ic_state & XLOG_STATE_IOERROR)) { if (iclog->ic_state != XLOG_STATE_IOERROR) {
/* /*
* Mark all the incore logs IOERROR. * Mark all the incore logs IOERROR.
* From now on, no log flushes will result. * From now on, no log flushes will result.
@ -3955,7 +3884,7 @@ xfs_log_force_umount(
* Somebody could've already done the hard work for us. * Somebody could've already done the hard work for us.
* No need to get locks for this. * No need to get locks for this.
*/ */
if (logerror && log->l_iclog->ic_state & XLOG_STATE_IOERROR) { if (logerror && log->l_iclog->ic_state == XLOG_STATE_IOERROR) {
ASSERT(XLOG_FORCED_SHUTDOWN(log)); ASSERT(XLOG_FORCED_SHUTDOWN(log));
return 1; return 1;
} }
@ -4006,21 +3935,8 @@ xfs_log_force_umount(
spin_lock(&log->l_cilp->xc_push_lock); spin_lock(&log->l_cilp->xc_push_lock);
wake_up_all(&log->l_cilp->xc_commit_wait); wake_up_all(&log->l_cilp->xc_commit_wait);
spin_unlock(&log->l_cilp->xc_push_lock); spin_unlock(&log->l_cilp->xc_push_lock);
xlog_state_do_callback(log, true, NULL); xlog_state_do_callback(log, true);
#ifdef XFSERRORDEBUG
{
xlog_in_core_t *iclog;
spin_lock(&log->l_icloglock);
iclog = log->l_iclog;
do {
ASSERT(iclog->ic_callback == 0);
iclog = iclog->ic_next;
} while (iclog != log->l_iclog);
spin_unlock(&log->l_icloglock);
}
#endif
/* return non-zero if log IOERROR transition had already happened */ /* return non-zero if log IOERROR transition had already happened */
return retval; return retval;
} }
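Several helpers in this file (xlog_state_finish_copy(), for example) stop taking l_icloglock themselves and instead assert that the caller already holds it. A sketch of that convention, illustrative only and not from the patch:

/* A helper that requires the caller to hold a lock documents and checks
 * that requirement with lockdep_assert_held() rather than locking itself.
 */
static void
example_finish_copy(struct xlog *log, struct xlog_in_core *iclog, int bytes)
{
	lockdep_assert_held(&log->l_icloglock);
	iclog->ic_offset += bytes;	/* caller already serialized us */
}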


@ -179,7 +179,7 @@ xlog_cil_alloc_shadow_bufs(
/* /*
* We free and allocate here as a realloc would copy * We free and allocate here as a realloc would copy
* unecessary data. We don't use kmem_zalloc() for the * unnecessary data. We don't use kmem_zalloc() for the
* same reason - we don't need to zero the data area in * same reason - we don't need to zero the data area in
* the buffer, only the log vector header and the iovec * the buffer, only the log vector header and the iovec
* storage. * storage.
@ -682,7 +682,7 @@ xlog_cil_push(
} }
/* check for a previously pushed seqeunce */ /* check for a previously pushed sequence */
if (push_seq < cil->xc_ctx->sequence) { if (push_seq < cil->xc_ctx->sequence) {
spin_unlock(&cil->xc_push_lock); spin_unlock(&cil->xc_push_lock);
goto out_skip; goto out_skip;
@ -847,7 +847,7 @@ restart:
goto out_abort; goto out_abort;
spin_lock(&commit_iclog->ic_callback_lock); spin_lock(&commit_iclog->ic_callback_lock);
if (commit_iclog->ic_state & XLOG_STATE_IOERROR) { if (commit_iclog->ic_state == XLOG_STATE_IOERROR) {
spin_unlock(&commit_iclog->ic_callback_lock); spin_unlock(&commit_iclog->ic_callback_lock);
goto out_abort; goto out_abort;
} }


@@ -40,17 +40,15 @@ static inline uint xlog_get_client_id(__be32 i)
 /*
  * In core log state
  */
-#define XLOG_STATE_ACTIVE    0x0001 /* Current IC log being written to */
-#define XLOG_STATE_WANT_SYNC 0x0002 /* Want to sync this iclog; no more writes */
-#define XLOG_STATE_SYNCING   0x0004 /* This IC log is syncing */
-#define XLOG_STATE_DONE_SYNC 0x0008 /* Done syncing to disk */
-#define XLOG_STATE_DO_CALLBACK \
-			     0x0010 /* Process callback functions */
-#define XLOG_STATE_CALLBACK  0x0020 /* Callback functions now */
-#define XLOG_STATE_DIRTY     0x0040 /* Dirty IC log, not ready for ACTIVE status*/
-#define XLOG_STATE_IOERROR   0x0080 /* IO error happened in sync'ing log */
-#define XLOG_STATE_ALL	     0x7FFF /* All possible valid flags */
-#define XLOG_STATE_NOTUSED   0x8000 /* This IC log not being used */
+enum xlog_iclog_state {
+	XLOG_STATE_ACTIVE,	/* Current IC log being written to */
+	XLOG_STATE_WANT_SYNC,	/* Want to sync this iclog; no more writes */
+	XLOG_STATE_SYNCING,	/* This IC log is syncing */
+	XLOG_STATE_DONE_SYNC,	/* Done syncing to disk */
+	XLOG_STATE_CALLBACK,	/* Callback functions now */
+	XLOG_STATE_DIRTY,	/* Dirty IC log, not ready for ACTIVE status */
+	XLOG_STATE_IOERROR,	/* IO error happened in sync'ing log */
+};
 
 /*
  * Flags to log ticket
@@ -179,8 +177,6 @@ typedef struct xlog_ticket {
 * - ic_next is the pointer to the next iclog in the ring.
 * - ic_log is a pointer back to the global log structure.
 * - ic_size is the full size of the log buffer, minus the cycle headers.
-* - ic_io_size is the size of the currently pending log buffer write, which
-*	might be smaller than ic_size
 * - ic_offset is the current number of bytes written to in this iclog.
 * - ic_refcnt is bumped when someone is writing to the log.
 * - ic_state is the state of the iclog.
@@ -205,9 +201,8 @@ typedef struct xlog_in_core {
 	struct xlog_in_core	*ic_prev;
 	struct xlog		*ic_log;
 	u32			ic_size;
-	u32			ic_io_size;
 	u32			ic_offset;
-	unsigned short		ic_state;
+	enum xlog_iclog_state	ic_state;
 	char			*ic_datap;	/* pointer to iclog data */
/* Callback structures need their own cacheline */ /* Callback structures need their own cacheline */
@ -399,8 +394,6 @@ struct xlog {
/* The following field are used for debugging; need to hold icloglock */ /* The following field are used for debugging; need to hold icloglock */
#ifdef DEBUG #ifdef DEBUG
void *l_iclog_bak[XLOG_MAX_ICLOGS]; void *l_iclog_bak[XLOG_MAX_ICLOGS];
/* log record crc error injection factor */
uint32_t l_badcrc_factor;
#endif #endif
/* log recovery lsn tracking (for buffer submission */ /* log recovery lsn tracking (for buffer submission */
xfs_lsn_t l_recovery_lsn; xfs_lsn_t l_recovery_lsn;
@ -542,7 +535,11 @@ xlog_cil_force(struct xlog *log)
* by a spinlock. This matches the semantics of all the wait queues used in the * by a spinlock. This matches the semantics of all the wait queues used in the
* log code. * log code.
*/ */
static inline void xlog_wait(wait_queue_head_t *wq, spinlock_t *lock) static inline void
xlog_wait(
struct wait_queue_head *wq,
struct spinlock *lock)
__releases(lock)
{ {
DECLARE_WAITQUEUE(wait, current); DECLARE_WAITQUEUE(wait, current);
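With ic_state now an enum of mutually exclusive states rather than a bitmask, compound state checks become plain equality comparisons instead of bit tests. A minimal sketch; the helper below is illustrative, not from the patch:

/* "Is this iclog idle?" as two equality checks, replacing the old style
 * (iclog->ic_state & (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY)).
 */
static inline bool
example_iclog_is_idle(struct xlog_in_core *iclog)
{
	return iclog->ic_state == XLOG_STATE_ACTIVE ||
	       iclog->ic_state == XLOG_STATE_DIRTY;
}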


@ -103,10 +103,9 @@ xlog_alloc_buffer(
* Pass log block 0 since we don't have an addr yet, buffer will be * Pass log block 0 since we don't have an addr yet, buffer will be
* verified on read. * verified on read.
*/ */
if (!xlog_verify_bno(log, 0, nbblks)) { if (XFS_IS_CORRUPT(log->l_mp, !xlog_verify_bno(log, 0, nbblks))) {
xfs_warn(log->l_mp, "Invalid block length (0x%x) for buffer", xfs_warn(log->l_mp, "Invalid block length (0x%x) for buffer",
nbblks); nbblks);
XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_HIGH, log->l_mp);
return NULL; return NULL;
} }
@ -152,11 +151,10 @@ xlog_do_io(
{ {
int error; int error;
if (!xlog_verify_bno(log, blk_no, nbblks)) { if (XFS_IS_CORRUPT(log->l_mp, !xlog_verify_bno(log, blk_no, nbblks))) {
xfs_warn(log->l_mp, xfs_warn(log->l_mp,
"Invalid log block/length (0x%llx, 0x%x) for buffer", "Invalid log block/length (0x%llx, 0x%x) for buffer",
blk_no, nbblks); blk_no, nbblks);
XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_HIGH, log->l_mp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
@ -244,19 +242,17 @@ xlog_header_check_recover(
* (XLOG_FMT_UNKNOWN). This stops us from trying to recover * (XLOG_FMT_UNKNOWN). This stops us from trying to recover
* a dirty log created in IRIX. * a dirty log created in IRIX.
*/ */
if (unlikely(head->h_fmt != cpu_to_be32(XLOG_FMT))) { if (XFS_IS_CORRUPT(mp, head->h_fmt != cpu_to_be32(XLOG_FMT))) {
xfs_warn(mp, xfs_warn(mp,
"dirty log written in incompatible format - can't recover"); "dirty log written in incompatible format - can't recover");
xlog_header_check_dump(mp, head); xlog_header_check_dump(mp, head);
XFS_ERROR_REPORT("xlog_header_check_recover(1)",
XFS_ERRLEVEL_HIGH, mp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} else if (unlikely(!uuid_equal(&mp->m_sb.sb_uuid, &head->h_fs_uuid))) { }
if (XFS_IS_CORRUPT(mp, !uuid_equal(&mp->m_sb.sb_uuid,
&head->h_fs_uuid))) {
xfs_warn(mp, xfs_warn(mp,
"dirty log entry has mismatched uuid - can't recover"); "dirty log entry has mismatched uuid - can't recover");
xlog_header_check_dump(mp, head); xlog_header_check_dump(mp, head);
XFS_ERROR_REPORT("xlog_header_check_recover(2)",
XFS_ERRLEVEL_HIGH, mp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
return 0; return 0;
@ -279,11 +275,10 @@ xlog_header_check_mount(
* by IRIX and continue. * by IRIX and continue.
*/ */
xfs_warn(mp, "null uuid in log - IRIX style log"); xfs_warn(mp, "null uuid in log - IRIX style log");
} else if (unlikely(!uuid_equal(&mp->m_sb.sb_uuid, &head->h_fs_uuid))) { } else if (XFS_IS_CORRUPT(mp, !uuid_equal(&mp->m_sb.sb_uuid,
&head->h_fs_uuid))) {
xfs_warn(mp, "log has mismatched uuid - can't recover"); xfs_warn(mp, "log has mismatched uuid - can't recover");
xlog_header_check_dump(mp, head); xlog_header_check_dump(mp, head);
XFS_ERROR_REPORT("xlog_header_check_mount",
XFS_ERRLEVEL_HIGH, mp);
return -EFSCORRUPTED; return -EFSCORRUPTED;
} }
return 0; return 0;
@ -471,7 +466,7 @@ xlog_find_verify_log_record(
xfs_warn(log->l_mp, xfs_warn(log->l_mp,
"Log inconsistent (didn't find previous header)"); "Log inconsistent (didn't find previous header)");
ASSERT(0); ASSERT(0);
error = -EIO; error = -EFSCORRUPTED;
goto out; goto out;
} }
@ -1347,10 +1342,11 @@ xlog_find_tail(
error = xlog_rseek_logrec_hdr(log, *head_blk, *head_blk, 1, buffer, error = xlog_rseek_logrec_hdr(log, *head_blk, *head_blk, 1, buffer,
&rhead_blk, &rhead, &wrapped); &rhead_blk, &rhead, &wrapped);
if (error < 0) if (error < 0)
return error; goto done;
if (!error) { if (!error) {
xfs_warn(log->l_mp, "%s: couldn't find sync record", __func__); xfs_warn(log->l_mp, "%s: couldn't find sync record", __func__);
return -EIO; error = -EFSCORRUPTED;
goto done;
} }
*tail_blk = BLOCK_LSN(be64_to_cpu(rhead->h_tail_lsn)); *tail_blk = BLOCK_LSN(be64_to_cpu(rhead->h_tail_lsn));
@@ -1699,11 +1695,10 @@ xlog_clear_stale_blocks(
 		 * the distance from the beginning of the log to the
 		 * tail.
 		 */
-		if (unlikely(head_block < tail_block || head_block >= log->l_logBBsize)) {
-			XFS_ERROR_REPORT("xlog_clear_stale_blocks(1)",
-					 XFS_ERRLEVEL_LOW, log->l_mp);
+		if (XFS_IS_CORRUPT(log->l_mp,
+				   head_block < tail_block ||
+				   head_block >= log->l_logBBsize))
 			return -EFSCORRUPTED;
-		}
 		tail_distance = tail_block + (log->l_logBBsize - head_block);
 	} else {
 		/*
@@ -1711,11 +1706,10 @@ xlog_clear_stale_blocks(
 		 * so the distance from the head to the tail is just
 		 * the tail block minus the head block.
 		 */
-		if (unlikely(head_block >= tail_block || head_cycle != (tail_cycle + 1))){
-			XFS_ERROR_REPORT("xlog_clear_stale_blocks(2)",
-					 XFS_ERRLEVEL_LOW, log->l_mp);
+		if (XFS_IS_CORRUPT(log->l_mp,
+				   head_block >= tail_block ||
+				   head_cycle != tail_cycle + 1))
 			return -EFSCORRUPTED;
-		}
 		tail_distance = tail_block - head_block;
 	}
@@ -2135,13 +2129,11 @@ xlog_recover_do_inode_buffer(
 		 */
 		logged_nextp = item->ri_buf[item_index].i_addr +
 				next_unlinked_offset - reg_buf_offset;
-		if (unlikely(*logged_nextp == 0)) {
+		if (XFS_IS_CORRUPT(mp, *logged_nextp == 0)) {
 			xfs_alert(mp,
 		"Bad inode buffer log record (ptr = "PTR_FMT", bp = "PTR_FMT"). "
 		"Trying to replay bad (0) inode di_next_unlinked field.",
 				item, bp);
-			XFS_ERROR_REPORT("xlog_recover_do_inode_buf",
-					 XFS_ERRLEVEL_LOW, mp);
 			return -EFSCORRUPTED;
 		}
@@ -2576,6 +2568,7 @@ xlog_recover_do_reg_buffer(
 	int			bit;
 	int			nbits;
 	xfs_failaddr_t		fa;
+	const size_t		size_disk_dquot = sizeof(struct xfs_disk_dquot);
 
 	trace_xfs_log_recover_buf_reg_buf(mp->m_log, buf_f);
@@ -2618,7 +2611,7 @@ xlog_recover_do_reg_buffer(
 					"XFS: NULL dquot in %s.", __func__);
 				goto next;
 			}
-			if (item->ri_buf[i].i_len < sizeof(xfs_disk_dquot_t)) {
+			if (item->ri_buf[i].i_len < size_disk_dquot) {
 				xfs_alert(mp,
 					"XFS: dquot too small (%d) in %s.",
 					item->ri_buf[i].i_len, __func__);
@@ -2969,22 +2962,18 @@ xlog_recover_inode_pass2(
 	 * Make sure the place we're flushing out to really looks
 	 * like an inode!
 	 */
-	if (unlikely(!xfs_verify_magic16(bp, dip->di_magic))) {
+	if (XFS_IS_CORRUPT(mp, !xfs_verify_magic16(bp, dip->di_magic))) {
 		xfs_alert(mp,
 	"%s: Bad inode magic number, dip = "PTR_FMT", dino bp = "PTR_FMT", ino = %Ld",
 			__func__, dip, bp, in_f->ilf_ino);
-		XFS_ERROR_REPORT("xlog_recover_inode_pass2(1)",
-				 XFS_ERRLEVEL_LOW, mp);
 		error = -EFSCORRUPTED;
 		goto out_release;
 	}
 	ldip = item->ri_buf[1].i_addr;
-	if (unlikely(ldip->di_magic != XFS_DINODE_MAGIC)) {
+	if (XFS_IS_CORRUPT(mp, ldip->di_magic != XFS_DINODE_MAGIC)) {
 		xfs_alert(mp,
 			"%s: Bad inode log record, rec ptr "PTR_FMT", ino %Ld",
 			__func__, item, in_f->ilf_ino);
-		XFS_ERROR_REPORT("xlog_recover_inode_pass2(2)",
-				 XFS_ERRLEVEL_LOW, mp);
 		error = -EFSCORRUPTED;
 		goto out_release;
 	}
@@ -3166,7 +3155,7 @@ xlog_recover_inode_pass2(
 		default:
 			xfs_warn(log->l_mp, "%s: Invalid flag", __func__);
 			ASSERT(0);
-			error = -EIO;
+			error = -EFSCORRUPTED;
 			goto out_release;
 		}
 	}
@@ -3247,12 +3236,12 @@ xlog_recover_dquot_pass2(
 	recddq = item->ri_buf[1].i_addr;
 	if (recddq == NULL) {
 		xfs_alert(log->l_mp, "NULL dquot in %s.", __func__);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
-	if (item->ri_buf[1].i_len < sizeof(xfs_disk_dquot_t)) {
+	if (item->ri_buf[1].i_len < sizeof(struct xfs_disk_dquot)) {
 		xfs_alert(log->l_mp, "dquot too small (%d) in %s.",
 			item->ri_buf[1].i_len, __func__);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	/*
@@ -3279,7 +3268,7 @@ xlog_recover_dquot_pass2(
 	if (fa) {
 		xfs_alert(mp, "corrupt dquot ID 0x%x in log at %pS",
 				dq_f->qlf_id, fa);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 	ASSERT(dq_f->qlf_len == 1);
@@ -3537,6 +3526,7 @@ xfs_cui_copy_format(
 		memcpy(dst_cui_fmt, src_cui_fmt, len);
 		return 0;
 	}
+	XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
 	return -EFSCORRUPTED;
 }
@@ -3601,8 +3591,10 @@ xlog_recover_cud_pass2(
 	struct xfs_ail			*ailp = log->l_ailp;
 
 	cud_formatp = item->ri_buf[0].i_addr;
-	if (item->ri_buf[0].i_len != sizeof(struct xfs_cud_log_format))
+	if (item->ri_buf[0].i_len != sizeof(struct xfs_cud_log_format)) {
+		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
 		return -EFSCORRUPTED;
+	}
 	cui_id = cud_formatp->cud_cui_id;
 
 	/*
@@ -3654,6 +3646,7 @@ xfs_bui_copy_format(
 		memcpy(dst_bui_fmt, src_bui_fmt, len);
 		return 0;
 	}
+	XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, NULL);
 	return -EFSCORRUPTED;
 }
@@ -3677,8 +3670,10 @@ xlog_recover_bui_pass2(
 	bui_formatp = item->ri_buf[0].i_addr;
-	if (bui_formatp->bui_nextents != XFS_BUI_MAX_FAST_EXTENTS)
+	if (bui_formatp->bui_nextents != XFS_BUI_MAX_FAST_EXTENTS) {
+		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
 		return -EFSCORRUPTED;
+	}
 	buip = xfs_bui_init(mp);
 	error = xfs_bui_copy_format(&item->ri_buf[0], &buip->bui_format);
 	if (error) {
@@ -3720,8 +3715,10 @@ xlog_recover_bud_pass2(
 	struct xfs_ail			*ailp = log->l_ailp;
 
 	bud_formatp = item->ri_buf[0].i_addr;
-	if (item->ri_buf[0].i_len != sizeof(struct xfs_bud_log_format))
+	if (item->ri_buf[0].i_len != sizeof(struct xfs_bud_log_format)) {
+		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
 		return -EFSCORRUPTED;
+	}
 	bui_id = bud_formatp->bud_bui_id;
 
 	/*
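The cud/bui/bud hunks above all add the same guard before the logged region is interpreted: the region length is compared against the size of the expected log format structure, and a mismatch is now reported and rejected as corruption rather than silently accepted. A minimal sketch of that guard follows; struct example_log_format, struct example_region, and check_log_format() are hypothetical names used for illustration, not XFS types.

/* Hypothetical types; only the length-check pattern mirrors the hunks above. */
#include <stdio.h>

struct example_log_format {		/* stand-in for an on-disk log format */
	unsigned int	id;
	unsigned int	nextents;
};

struct example_region {			/* stand-in for a logged region (ri_buf) */
	void		*i_addr;
	int		i_len;
};

static int check_log_format(struct example_region *reg)
{
	/* Reject a region whose length doesn't match the format structure. */
	if (reg->i_len != (int)sizeof(struct example_log_format)) {
		fprintf(stderr, "bad log format size %d (expected %zu)\n",
			reg->i_len, sizeof(struct example_log_format));
		return -117;		/* stand-in for -EFSCORRUPTED */
	}
	return 0;
}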
@@ -4018,7 +4015,7 @@ xlog_recover_commit_pass1(
 		xfs_warn(log->l_mp, "%s: invalid item type (%d)",
 			__func__, ITEM_TYPE(item));
 		ASSERT(0);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 }
@@ -4066,7 +4063,7 @@ xlog_recover_commit_pass2(
 		xfs_warn(log->l_mp, "%s: invalid item type (%d)",
 			__func__, ITEM_TYPE(item));
 		ASSERT(0);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 }
@@ -4187,7 +4184,7 @@ xlog_recover_add_to_cont_trans(
 	ASSERT(len <= sizeof(struct xfs_trans_header));
 	if (len > sizeof(struct xfs_trans_header)) {
 		xfs_warn(log->l_mp, "%s: bad header length", __func__);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	xlog_recover_add_item(&trans->r_itemq);
@@ -4243,13 +4240,13 @@ xlog_recover_add_to_trans(
 			xfs_warn(log->l_mp, "%s: bad header magic number",
 				__func__);
 			ASSERT(0);
-			return -EIO;
+			return -EFSCORRUPTED;
 		}
 
 		if (len > sizeof(struct xfs_trans_header)) {
 			xfs_warn(log->l_mp, "%s: bad header length", __func__);
 			ASSERT(0);
-			return -EIO;
+			return -EFSCORRUPTED;
 		}
 
 		/*
@@ -4285,7 +4282,7 @@ xlog_recover_add_to_trans(
 				  in_f->ilf_size);
 			ASSERT(0);
 			kmem_free(ptr);
-			return -EIO;
+			return -EFSCORRUPTED;
 		}
 
 		item->ri_total = in_f->ilf_size;
@@ -4293,7 +4290,16 @@ xlog_recover_add_to_trans(
 			kmem_zalloc(item->ri_total * sizeof(xfs_log_iovec_t),
 				    0);
 	}
-	ASSERT(item->ri_total > item->ri_cnt);
+
+	if (item->ri_total <= item->ri_cnt) {
+		xfs_warn(log->l_mp,
+			"log item region count (%d) overflowed size (%d)",
+			item->ri_cnt, item->ri_total);
+		ASSERT(0);
+		kmem_free(ptr);
+		return -EFSCORRUPTED;
+	}
+
 	/* Description region is ri_buf[0] */
 	item->ri_buf[item->ri_cnt].i_addr = ptr;
 	item->ri_buf[item->ri_cnt].i_len  = len;
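The hunk above replaces an ASSERT, which compiles away on production kernels, with a real bounds check: once ri_cnt has reached ri_total, storing into ri_buf[ri_cnt] below would write past the array that was sized from the log record, so recovery now frees the copied region and fails with -EFSCORRUPTED instead. A simplified sketch of that guard follows, using a hypothetical item layout rather than the real struct xlog_recover_item.

/* Hypothetical, simplified item; only the bounds check mirrors the hunk above. */
#include <stdio.h>
#include <stdlib.h>

struct example_region {
	void	*addr;
	int	len;
};

struct example_item {
	int			ri_total;	/* regions allocated */
	int			ri_cnt;		/* regions filled so far */
	struct example_region	*ri_buf;
};

static int example_add_region(struct example_item *item, void *ptr, int len)
{
	/* Never index past the array that was sized from the log record. */
	if (item->ri_total <= item->ri_cnt) {
		fprintf(stderr, "region count %d overflowed size %d\n",
			item->ri_cnt, item->ri_total);
		free(ptr);
		return -117;	/* stand-in for -EFSCORRUPTED */
	}
	item->ri_buf[item->ri_cnt].addr = ptr;
	item->ri_buf[item->ri_cnt].len = len;
	item->ri_cnt++;
	return 0;
}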
@@ -4380,7 +4386,7 @@ xlog_recovery_process_trans(
 	default:
 		xfs_warn(log->l_mp, "%s: bad flag 0x%x", __func__, flags);
 		ASSERT(0);
-		error = -EIO;
+		error = -EFSCORRUPTED;
 		break;
 	}
 	if (error || freeit)
@@ -4460,7 +4466,7 @@ xlog_recover_process_ophdr(
 		xfs_warn(log->l_mp, "%s: bad clientid 0x%x",
 			__func__, ohead->oh_clientid);
 		ASSERT(0);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	/*
@@ -4470,7 +4476,7 @@ xlog_recover_process_ophdr(
 	if (dp + len > end) {
 		xfs_warn(log->l_mp, "%s: bad length 0x%x", __func__, len);
 		WARN_ON(1);
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	trans = xlog_recover_ophdr_to_trans(rhash, rhead, ohead);
@@ -5172,8 +5178,10 @@ xlog_recover_process(
 		 * If the filesystem is CRC enabled, this mismatch becomes a
 		 * fatal log corruption failure.
 		 */
-		if (xfs_sb_version_hascrc(&log->l_mp->m_sb))
+		if (xfs_sb_version_hascrc(&log->l_mp->m_sb)) {
+			XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, log->l_mp);
 			return -EFSCORRUPTED;
+		}
 	}
 
 	xlog_unpack_data(rhead, dp, log);
@@ -5190,31 +5198,25 @@ xlog_valid_rec_header(
 {
 	int			hlen;
 
-	if (unlikely(rhead->h_magicno != cpu_to_be32(XLOG_HEADER_MAGIC_NUM))) {
-		XFS_ERROR_REPORT("xlog_valid_rec_header(1)",
-				XFS_ERRLEVEL_LOW, log->l_mp);
+	if (XFS_IS_CORRUPT(log->l_mp,
+			   rhead->h_magicno != cpu_to_be32(XLOG_HEADER_MAGIC_NUM)))
 		return -EFSCORRUPTED;
-	}
-	if (unlikely(
-	    (!rhead->h_version ||
-	    (be32_to_cpu(rhead->h_version) & (~XLOG_VERSION_OKBITS))))) {
+	if (XFS_IS_CORRUPT(log->l_mp,
+			   (!rhead->h_version ||
+			   (be32_to_cpu(rhead->h_version) &
+			    (~XLOG_VERSION_OKBITS))))) {
 		xfs_warn(log->l_mp, "%s: unrecognised log version (%d).",
 			__func__, be32_to_cpu(rhead->h_version));
-		return -EIO;
+		return -EFSCORRUPTED;
 	}
 
 	/* LR body must have data or it wouldn't have been written */
 	hlen = be32_to_cpu(rhead->h_len);
-	if (unlikely( hlen <= 0 || hlen > INT_MAX )) {
-		XFS_ERROR_REPORT("xlog_valid_rec_header(2)",
-				XFS_ERRLEVEL_LOW, log->l_mp);
+	if (XFS_IS_CORRUPT(log->l_mp, hlen <= 0 || hlen > INT_MAX))
 		return -EFSCORRUPTED;
-	}
-	if (unlikely( blkno > log->l_logBBsize || blkno > INT_MAX )) {
-		XFS_ERROR_REPORT("xlog_valid_rec_header(3)",
-				XFS_ERRLEVEL_LOW, log->l_mp);
+	if (XFS_IS_CORRUPT(log->l_mp,
+			   blkno > log->l_logBBsize || blkno > INT_MAX))
 		return -EFSCORRUPTED;
-	}
 	return 0;
 }
@@ -5296,8 +5298,12 @@ xlog_do_recovery_pass(
 	"invalid iclog size (%d bytes), using lsunit (%d bytes)",
 					h_size, log->l_mp->m_logbsize);
 				h_size = log->l_mp->m_logbsize;
-			} else
-				return -EFSCORRUPTED;
+			} else {
+				XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW,
+						log->l_mp);
+				error = -EFSCORRUPTED;
+				goto bread_err1;
+			}
 		}
 
 		if ((be32_to_cpu(rhead->h_version) & XLOG_VERSION_2) &&

Some files were not shown because too many files have changed in this diff.