netbsd-bugs@netbsd.org

Subject: Re: kern/39548: kernel debugging assertion "(vp->v_flag & VONWORKLST)" failed
From: Simon Burge
Date: Thu, 18 Sep 2008 05:20:06 +0000 UTC
The following reply was made to PR kern/39548; it has been noted by GNATS.

From: Simon Burge <simonb@xxxxxxxxxx>
To: gnats-bugs@xxxxxxxxxx
Cc: kern-bug-people@xxxxxxxxxx, gnats-admin@xxxxxxxxxx,
    netbsd-bugs@xxxxxxxxxx, Simon Burge <simonb@xxxxxxxxxx>
Subject: Re: kern/39548: kernel debugging assertion "(vp->v_flag & VONWORKLST)" failed
Date: Thu, 18 Sep 2008 15:16:19 +1000

 Antti Kantee wrote:
 
 >  Looks like a bad case of a race condition between moving the vnode on
 >  and off the worklist, i.e. a conflict between the syncer and the pagedaemon.
 >  I think you need to get memory exhausted and the pagedaemon interested
 >  in the situation and a case of very bad luck.
 >  
 >  Since this code has changed much for -current, I'd say a) sacrifice
 >  more chicken (poulet de bresse preferred) b) run without DEBUG or c)
 >  remove the KDASSERT and compile a new kernel.
 
 I might run with c) for now.  If that KDASSERT is subject to a race,
 should we just remove it on netbsd-4?  And is it still potentially a
 problem with -current too?  I guess some meditation might apply here...
 
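 For the archives, the kind of race being suspected here is the classic
 check-then-act pattern: the flag is tested in one context, and another
 context (here, the pagedaemon finishing a putpages) can clear it before
 the first context acts on its stale conclusion. A minimal user-space
 sketch of that pattern follows; the structure, function names, and flag
 value are illustrative only, not the actual kernel code:
 
 ```c
 #include <stdio.h>
 
 #define VONWORKLST 0x0080  /* illustrative value, not from sys/vnode.h */
 
 struct fake_vnode {
     int v_flag;
 };
 
 /* Stands in for the pagedaemon taking the vnode off the syncer
  * worklist between our flag check and our use of that result. */
 static void
 pagedaemon_clears_flag(struct fake_vnode *vp)
 {
     vp->v_flag &= ~VONWORKLST;
 }
 
 int
 main(void)
 {
     struct fake_vnode vn = { .v_flag = VONWORKLST };
 
     /* Syncer side: the flag is set when we look... */
     int was_on_worklist = (vn.v_flag & VONWORKLST) != 0;
 
     /* ...but another context runs before we act on that fact. */
     pagedaemon_clears_flag(&vn);
 
     /* The earlier conclusion no longer matches the vnode, which is
      * exactly the window in which a KDASSERT on the flag can fire. */
     printf("checked: %d, now: %d\n",
         was_on_worklist, (vn.v_flag & VONWORKLST) != 0);
     return 0;
 }
 ```
 
 In the kernel the fix would be holding the appropriate interlock across
 both the check and the action, rather than deleting the assertion.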
 >  But, you could still print the vnode and append to the PR.
 
 (gdb) print *(struct vnode *)0xd4c3d938
 $3 = {v_uobj = {vmobjlock = {lock_data = 1, 
       lock_file = 0xc08ff85b "./sys/uvm/uvm_pdaemon.c", 
       unlock_file = 0xc095eb88 "./sys/miscfs/genfs/genfs_vnops.c", 
       lock_line = 406, unlock_line = 1464, list = {tqe_next = 0x0, 
         tqe_prev = 0xc09a6f70}, lock_holder = 0}, pgops = 0xc09a484c, memq = {
       tqh_first = 0xc30c0a68, tqh_last = 0xc216c938}, uo_npages = 191, 
     uo_refs = 1}, v_size = 367638528, v_flag = 65664, v_numoutput = 0, 
   v_writecount = 0, v_holdcnt = 4, v_mount = 0xc37c7000, v_op = 0xc39d4000, 
   v_freelist = {tqe_next = 0xda0aa590, tqe_prev = 0xde71d6dc}, v_mntvnodes = {
     tqe_next = 0xd2010d04, tqe_prev = 0xdfb2c33c}, v_cleanblkhd = {
     lh_first = 0xd3ea85dc}, v_dirtyblkhd = {lh_first = 0x0}, 
   v_synclist_slot = 31, v_synclist = {tqe_next = 0xdd5be3b0, 
     tqe_prev = 0xc3753ef8}, v_dnclist = {lh_first = 0x0}, v_nclist = {
     lh_first = 0x0}, v_un = {vu_mountedhere = 0xe9540408, 
     vu_socket = 0xe9540408, vu_specinfo = 0xe9540408, 
     vu_fifoinfo = 0xe9540408, vu_ractx = 0xe9540408}, v_lease = 0x0, 
   v_type = VREG, v_tag = VT_UFS, v_lock = {lk_interlock = {lock_data = 0, 
       lock_file = 0xc0901702 "./sys/kern/kern_lock.c", 
       unlock_file = 0xc0901702 "./sys/kern/kern_lock.c", lock_line = 568, 
       unlock_line = 920, list = {tqe_next = 0x0, tqe_prev = 0x0}, 
       lock_holder = 4294967295}, lk_flags = 0, lk_sharecount = 0, 
     lk_exclusivecount = 0, lk_recurselevel = 0, lk_waitcount = 0, 
     lk_wmesg = 0xc0904d6e "vnlock", lk_un = {lk_un_sleep = {
         lk_sleep_lockholder = -1, lk_sleep_locklwp = 0, lk_sleep_prio = 20, 
         lk_sleep_timo = 0, lk_newlock = 0x0}, lk_un_spin = {
         lk_spin_cpu = 4294967295, lk_spin_list = {tqe_next = 0x0, 
           tqe_prev = 0x14}}}, 
     lk_lock_file = 0xc095eb88 "./sys/miscfs/genfs/genfs_vnops.c", 
     lk_unlock_file = 0xc095eb88 "./sys/miscfs/genfs/genfs_vnops.c", 
     lk_lock_line = 309, lk_unlock_line = 325}, v_vnlock = 0xd4c3d9c4, 
   v_data = 0xd977978c, v_klist = {slh_first = 0x0}}
 
 Simon.
 
