simple-scan[4467]: segfault at 18 ip 0000000000416a4a sp 00007fffd90e26f0 error 6 in simple-scan[400000+3e000]
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Simple Scan | Opinion | Undecided | Unassigned |
Bug Description
Distribution: Debian Sid/unstable
Version: 3.4.2-1
Scanning in Text mode worked without a problem. Then, after switching to Photo, the scan started, but simple-scan crashed with a segfault in the middle of it. Here are the messages from `/var/log/syslog`.
Sep 10 19:21:31 myhostname simple-scan: sane_hpaio_cancel: already cancelled!
Sep 10 19:26:30 myhostname kernel: [ 2337.120372] simple-scan[4467]: segfault at 18 ip 0000000000416a4a sp 00007fffd90e26f0 error 6 in simple-scan[400000+3e000]
1. Please note that this issue only happened once and I have not been able to reproduce it since.
2. Unfortunately there is no debug package in Debian and my patch #687213 [1] has not been applied yet. Therefore I was not able to use GDB to locate the source code line where the crash happened. It should be at address 416a4a - 400000 = 16a4a. Maybe you can achieve that; a sketch of the GDB commands follows below the reference.
[1] http://
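For reference, a minimal sketch of how that address could be mapped back to a source line once debugging symbols are available. This assumes the binary is the non-PIE /usr/bin/simple-scan mapped at 0x400000, as in the kernel line above, so GDB can resolve the absolute address directly:

$ gdb /usr/bin/simple-scan
(gdb) info line *0x416a4a
(gdb) list *0x416a4a

addr2line -f -C -e /usr/bin/simple-scan 0x416a4a should give the same answer.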
PS: For the record, these are the messages when starting Simple Scan after the crash and scanning more documents.
Sep 10 19:28:06 myhostname simple-scan: io/hpmud/jd.c 624: timeout read_channel sec=2 hp:/net/
Sep 10 19:28:07 myhostname simple-scan: invalid SclInquire(
Sep 10 19:28:07 myhostname simple-scan: exp:
Sep 10 19:28:07 myhostname simple-scan: [0000] 1B 2A 73 31 38 64 .*s18d
Sep 10 19:28:07 myhostname simple-scan: act:
Sep 10 19:28:07 myhostname simple-scan: [0000] 30 30 00
Sep 10 19:28:10 myhostname simple-scan: io/hpmud/jd.c 624: timeout read_channel sec=2 hp:/net/
Sep 10 19:28:16 myhostname simple-scan: invalid SclInquire(
Sep 10 19:28:16 myhostname simple-scan: exp:
Sep 10 19:28:16 myhostname simple-scan: [0000] 1B 2A 73 35 30 35 64 .*s505d
Sep 10 19:28:16 myhostname simple-scan: act:
Sep 10 19:28:16 myhostname simple-scan: [0000] 30 30 00
Sep 10 19:28:18 myhostname simple-scan: sane_hpaio_cancel: already cancelled!
Sep 10 19:32:22 myhostname simple-scan: sane_hpaio_cancel: already cancelled!
Paul Menzel (pm-debian) wrote : Re: [Bug 1053616] Re: simple-scan[4467]: segfault at 18 ip 0000000000416a4a sp 00007fffd90e26f0 error 6 in simple-scan[400000+3e000] | #2 |
Dear Michael,
On Thursday, 2012-11-01 at 07:24 +0000, Michael Nagel wrote:
> thank you for your interest in Simple Scan and sorry that I did not
> reply earlier.
> Unfortunately we really lack the manpower to properly investigate all
> of the crashes with Simple Scan.
I see. Thank you for your reply!
> Part of the problem is that a lot of the crashes don't happen in
> Simple Scan itself but in the SANE backend (i.e. driver) used. I
> suspect this might be the case here as well.
According to the segmentation fault line in the Linux kernel log, the
crash happened in Simple Scan, though. So Simple Scan should handle such
things gracefully, I guess. But without more information, no fix can be
found. Maybe we will be lucky and I will hit that bug again (with the
debugging package now installed).
> So my change of the status of this bug does not mean that there is no
> problem (I acknowledge there is).
> But I do not want to give the false impression / expectation that
> something is going to happen here.
Thank you for the clear answer. It makes sense and I understand.
> Feel free to update this with follow-up information and to file
> further tickets!
> Thanks again and I am sorry again!
No problem.
Michael Nagel (nailor) wrote : | #3 |
Thanks for your understanding.
You might possibly be interested in these two:
Big picture solution: Run scanner thread in separate process (Bug #564357)
Short term workaround: Simple Scan crashes result in preventable data loss (Bug #897469)
Paul Menzel (pm-debian) wrote : | #4 |
Thanks to installing debugging symbols with the patch attached to <http://
Core was generated by `simple-scan -d'.
Program terminated with signal 11, Segmentation fault.
#0 page_parse_line (size_changed=
569 _tmp24_[_tmp25_ + _tmp26_] = _tmp31_;
(gdb) l
564 _tmp28_ = _tmp27_->data;
565 _tmp28__length1 = _tmp27_
566 _tmp29_ = line_offset;
567 _tmp30_ = i;
568 _tmp31_ = _tmp28_[_tmp29_ + _tmp30_];
569 _tmp24_[_tmp25_ + _tmp26_] = _tmp31_;
570 _tmp32_ = _tmp24_[_tmp25_ + _tmp26_];
571 }
572 }
573 }
(gdb) p* _tmp24_
value has been optimized out
(gdb) p* size_changed
$1 = 0
(gdb) info regs
Undefined info command: "regs". Try "help info".
(gdb) info re
record registers
(gdb) info registers
rax 0x0 0
rbx 0x0 0
rcx 0x70 112
rdx 0x0 0
rsi 0x0 0
rdi 0x0 0
rbp 0x1f6f8e0 0x1f6f8e0
rsp 0x7fffae471990 0x7fffae471990
r8 0x7fffae471b60 140736117283680
r9 0x0 0
r10 0x7fffae471bd8 140736117283800
r11 0x7f20ebb5f9c6 139779370252742
r12 0x1f98380 33129344
r13 0x355 853
r14 0x0 0
r15 0x7fffae471b60 140736117283680
rip 0x414793 0x414793 <page_parse_
eflags 0x10246 [ PF ZF IF RF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
(gdb) p line_offset
$2 = 0
(gdb) p* line_offset
Cannot access memory at address 0x0
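For completeness, the session above was presumably obtained by pointing GDB at the executable and the core dump, roughly like this (the core file path is only a placeholder):

$ gdb /usr/bin/simple-scan ./core
(gdb) bt full
(gdb) info registers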
Paul Menzel (pm-debian) wrote : | #5 |
And here is the backtrace.
#0 page_parse_line (size_changed=
_tmp24_ = <optimized out>
_tmp28_ = 0x7f20e00b7970 "p"
_tmp31_ = 112 'p'
_tmp24__length1 = 0
_tmp28__length1 = 161
_tmp23_ = <optimized out>
_tmp27_ = 0x1f98380
_tmp19_ = <optimized out>
_tmp22_ = 0x1f98380
_tmp21_ = <optimized out>
_tmp18_ = 0
i = <optimized out>
_tmp1_ = <optimized out>
_tmp14_ = 0
offset = 0
_tmp17_ = 80
line_offset = 0
_vala_
_tmp2_ = 0
_tmp15_ = 0
_tmp33_ = <optimized out>
line_number = 853
_tmp0_ = 0x1f98380
_tmp13_ = 853
_tmp16_ = 0x1f98380
#1 page_parse_
_tmp1_ = <optimized out>
_tmp3_ = <optimized out>
_tmp5_ = <optimized out>
_tmp7_ = 0
_tmp4_ = 0x1f98380
_tmp6_ = 0x1f98380
_tmp8_ = 0
_tmp0_ = 0
i = <optimized out>
size_has_changed = <optimized out>
_tmp9_ = <optimized out>
__PRETTY_
#2 0x000000000041a161 in application_
at simple-scan.c:1181
_tmp3_ = 0x1f6f8e0
_tmp0_ = 0x1f6f8e0
_tmp2_ = 0
#3 _application_
No locals.
#4 0x00007f20ecc6c6e0 in g_closure_invoke (closure=0x19344e0, return_value=0x0, n_param_values=2, param_values=
invocation
marshal = 0x41f080 <g_cclosure_
marshal_data = 0x0
in_marshal = 0
real_closure = 0x19344c0
__PRETTY_
#5 0x00007f20ecc7d750 in signal_
emission_
at /tmp/buildd/
tmp = <optimized out>
handler = 0x1932a30
accumulator = 0x0
emission = {next = 0x0, instance = 0x18e2be0, ihint = {signal_id = 253, detail = 0, run_type = G_SIGNAL_
state = EMISSION_RUN, chain_type = 4}
class_closure = 0x0
hlist = 0x1932a30
handler_list = 0x1932a30
return_accu = 0x0
accu = {g_type = 0, data = {{v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0,
v_double = 0, v_pointer = 0x0}, {v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0,
v_double = 0, v_pointer = 0x0}}}
signal_id = 253
max_sequentia
return_
#6 0x00007f20ecc856bc in g_signal_
var_
instance_
signal_
param_values = 0x7fffae471bd8
node = 0x1932f30
i = <optimized out>
n_params = 1
__PRE...
Paul Menzel (pm-debian) wrote : | #6 |
The above two pastes are from the following segfault, not the one originally reported. :( Sorry.
[16967.131743] simple-scan[7879]: segfault at 0 ip 0000000000414793 sp 00007fffae471990 error 6 in simple-
Paul Menzel (pm-debian) wrote : | #7 |
Here is the backtrace from my original report. `self->priv = NULL` causes the segmentation fault. Has that been addressed in more recent versions?
Core was generated by `simple-scan'.
Program terminated with signal 11, Segmentation fault.
#0 page_view_
4089 self->priv-
(gdb) l
4084
4085
4086 static void page_view_
4087 g_return_if_fail (self != NULL);
4088 g_return_if_fail (p != NULL);
→ 4089 self->priv-
4090 g_signal_
4091 g_signal_
4092 }
4093
(gdb) bt f
#0 page_view_
No locals.
#1 _page_view_
No locals.
#2 0x00007ffb85d766e0 in g_closure_invoke (closure=0x2622200, return_value=0x0, n_param_values=1, param_values=
invocation
marshal = 0x40d470 <g_cclosure_
marshal_data = 0x0
in_marshal = 0
real_closure = 0x26221e0
__PRETTY_
#3 0x00007ffb85d87750 in signal_
emission_
at /tmp/buildd/
tmp = <optimized out>
handler = 0x222be70
accumulator = 0x0
emission = {next = 0x7fff47dbb0b0, instance = 0x265a4c0, ihint = {signal_id = 243, detail = 0, run_type = G_SIGNAL_
state = EMISSION_RUN, chain_type = 4}
class_closure = 0x0
hlist = 0x2658d00
handler_list = 0x2658d00
return_accu = 0x0
accu = {g_type = 0, data = {{v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0,
v_double = 0, v_pointer = 0x0}, {v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0,
v_double = 0, v_pointer = 0x0}}}
signal_id = 243
max_sequentia
return_
#4 0x00007ffb85d8f6bc in g_signal_
var_
instance_
signal_
param_values = 0x7fff47dbab18
node = 0x23c6760
i = <optimized out>
n_params = 0
__PRETTY_
#5 0x00007ffb85d8fd60 in g_signal_
detailed_
var_args = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 0x7fff47dbaed0, reg_save_area = 0x7fff47dbade0}}
detail = 0
signal_id = 243
_...
Michael Nagel (nailor) wrote : | #8 |
This should be the relevant C code, page-view.c:4086ff:
static void page_view_
g_return_if_fail (self != NULL);
g_return_if_fail (p != NULL);
self->
g_signal_
g_signal_
}
corresponding to this Vala code: page-view.
private void page_size_
{
/* Regenerate image */
changed ();
}
Related is page-view.c:69ff:
struct _PageView {
GTypeInstance parent_instance;
volatile int ref_count;
PageViewPrivate * priv;
};
self->priv->*** seems to be a common idiom in the generated C code, but this is where my knowledge of Vala ends.
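Just to make the discussion concrete, here is a minimal C sketch of the kind of guard that would turn the NULL-priv dereference from comment #7 into a warning instead of a crash. The function name and the body are reconstructed guesses based on the truncated listing above (it relies on the PageView declaration from page-view.c), not the actual generated code, and the real fix would more likely be to stop the signal from reaching a disposed PageView in the first place:

static void
page_view_page_size_changed_cb (PageView *self, Page *p)   /* name is a guess */
{
    g_return_if_fail (self != NULL);
    g_return_if_fail (p != NULL);
    /* Hypothetical extra guard: if the private data is already gone,
     * warn and return instead of dereferencing a NULL priv pointer. */
    g_return_if_fail (self->priv != NULL);
    /* ... original body: update priv state and emit the two signals ... */
}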
Michael Nagel (nailor) wrote : | #1 |
Hello Paul,
thank you for your interest in Simple Scan and sorry that I did not reply earlier.
Unfortunately we really lack the manpower to properly investigate all of the crashes with Simple Scan.
Part of the problem is that a lot of the crashes don't happen in Simple Scan itself but in the SANE backend (i.e. driver) used. I suspect this might be the case here as well.
So my change of the status of this bug does not mean that there is no problem (I acknowledge there is).
But I do not want to give the false impression / expectation that something is going to happen here.
Feel free to update this with follow-up information and to file further tickets!
Thanks again and I am sorry again!
Best Regards
Michael