Subject: Re: [ufs-discuss] snv_130 panic
From: "Frank Batschulat (Home)"
Date: Tue, 19 Jan 2010 16:15:54 +0100
Stoyan,

would it be possible for you to provide us with a location
where we can retrieve the core dump? (If yes, send me a note off-list,
directly.)

I'd like to gather it; it may not have the same root cause as the ZFS bug
6915516 that Jim mentioned. I had a chat with Casper offline, and the most
interesting question here is who did a bad crfree() - so we should file a
new bug for this.
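
To illustrate the failure mode we suspect (a minimal, made-up sketch - the
function name is hypothetical, only crhold()/crfree() are the real kernel
routines):

    /*
     * Hypothetical example, not the actual offender: one crfree() too
     * many releases a reference the caller still relies on.
     */
    #include <sys/cred.h>

    static void
    keep_cred_around(cred_t *cr)
    {
            crhold(cr);     /* take our own hold on the caller's cred */

            /* ... use cr ... */

            crfree(cr);     /* drops the hold taken above - correct */
            crfree(cr);     /* BAD: also drops the caller's reference */
    }

Once the reference count hits zero the cred_t is freed, and a later
ufs_iaccess() -> groupmember() -> supgroupmember() walking the group list
of that stale credential would fault much like the trace below.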

thanks
frankB


On Tue, 19 Jan 2010 15:31:45 +0100, Jim Rice <Jim.Rice@xxxxxxx> wrote:

> The stack looks like CR 6915516 "panic in supgroupmember, snv_130"
>
> see http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6915516
>
> I think UFS is innocent in this case.
>
> Regards,
>     Jim.
>
> Stoyan Angelov wrote:
>> hello all,
>>
>> several hours after upgrading my Sun Fire X4100 M2 machine to snv_130,
>> it crashed with a panic that looks like the one described in bug 3428:
>> http://defect.opensolaris.org/bz/show_bug.cgi?id=3428
>>
>> in my case I do _not_ get the description saying: "occurred in
>> module "ufs" due to a NULL pointer dereference", and I am not sure if
>> this is the same bug.
>>
>> panic on snv_130, x64
>> =====================
>> panic[cpu1]/thread=ffffff02e3748780:
>> BAD TRAP: type=e (#pf Page fault) rp=ffffff000f9f5670
>> addr=ffffff03474fb77c
>>
>>
>> httpd:
>> #pf Page fault
>> Bad kernel fault at addr=0xffffff03474fb77c
>> pid=8175, pc=0xfffffffffb9c13a1, sp=0xffffff000f9f5760, eflags=0x10213
>> cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
>> cr2: ffffff03474fb77c
>> cr3: 223da6000
>> cr8: c
>>
>>         rdi:                0 rsi: ffffff02e0279490 rdx:                0
>>         rcx:         1ab83437  r8:         3570686f  r9:         1ab83437
>>         rax:         3570686f rbx:                1 rbp: ffffff000f9f5770
>>         r10: ffffff02dc6ee698 r11: ffffff000f9f5c41 r12: ffffff02e0279490
>>         r13: ffffff02d61d9cf8 r14:               40 r15: ffffff02e0654840
>>         fsb:                0 gsb: ffffff02d633b040  ds:               4b
>>          es:               4b  fs:                0  gs:              1c3
>>         trp:                e err:                0 rip: fffffffffb9c13a1
>>          cs:               30 rfl:            10213 rsp: ffffff000f9f5760
>>          ss:               38
>>
>> ffffff000f9f5550 unix:die+10f ()
>> ffffff000f9f5660 unix:trap+177e ()
>> ffffff000f9f5670 unix:cmntrap+e6 ()
>> ffffff000f9f5770 genunix:supgroupmember+39 ()
>> ffffff000f9f5790 genunix:groupmember+21 ()
>> ffffff000f9f57e0 ufs:ufs_iaccess+d3 ()
>> ffffff000f9f5870 ufs:ufs_lookup+c7 ()
>> ffffff000f9f5910 genunix:fop_lookup+ed ()
>> ffffff000f9f5b50 genunix:lookuppnvp+281 ()
>> ffffff000f9f5bf0 genunix:lookuppnatcred+11b ()
>> ffffff000f9f5ce0 genunix:lookupnameatcred+98 ()
>> ffffff000f9f5d70 genunix:lookupnameat+69 ()
>> ffffff000f9f5e00 genunix:cstatat_getvp+164 ()
>> ffffff000f9f5ea0 genunix:cstatat64_32+82 ()
>> ffffff000f9f5ec0 genunix:stat64_32+31 ()
>> ffffff000f9f5f10 unix:brand_sys_syscall32+19d ()
>>
>>
>> here is the full mdebug output from the kernel crash dump:
>> http://www.filibeto.org/~aduritz/misc/snv_130-crash/snv_130-mdebug-adv-20100119-02
>>
>>
>> here is the output from scat:
>> http://www.filibeto.org/~aduritz/misc/snv_130-crash/snv_130-scat-20100119-03
>>
>>
>>
>> any help on this issue will be appreciated.
>>
>>
>> greetings,
>>
>> Stoyan Angelov
>>
>>
>>
> 



-- 
frankB

It is always possible to agglutinate multiple separate problems
into a single complex interdependent solution.
In most cases this is a bad idea.
_______________________________________________
ufs-discuss mailing list
ufs-discuss@xxxxxxxxxxxxxxx
