Error: requested zero bytes from [...] (skip=65536) #45
Which version of dupd are you running? Looking at the source, that particular message is probably from 1.6 or earlier. I suggest first trying the latest released version (1.7.1). In general, though, such an error might be caused by the file contents/size changing while the scan is running.
Just compiled from git master branch. Here's the stack at the point where
the error message is about to print.
Thread 3 "dupd" hit Breakpoint 2, read_entry_bytes (entry=0x7fffd894a642,
filesize=4763056, path=0x7ffff780eed0
"/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive",
output=0x7ffd2c5b3090 "arch?aut\341", <incomplete sequence \371>,
bytes=0, skip=65536, bytes_read=0x7ffff780edd8) at src/utils.c:196
196 printf("error: requested zero bytes from [%s] (skip=%" PRIu64 ")\n",
(gdb) where
#0 read_entry_bytes (entry=0x7fffd894a642, filesize=4763056,
path=0x7ffff780eed0
"/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive",
output=0x7ffd2c5b3090 "arch?aut\341", <incomplete sequence \371>,
bytes=0, skip=65536, bytes_read=0x7ffff780edd8) at src/utils.c:196
#1 0x00005555555732f2 in fill_data_block (head=0x7fffd894a5c4,
entry=0x7fffd894a642, path=0x7ffff780eed0
"/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive")
at src/sizelist.c:438
#2 0x0000555555573d17 in read_list_reader (arg=0x7fffffffdac0) at
src/sizelist.c:605
#3 0x00007ffff7b65609 in start_thread (arg=<optimized out>) at
pthread_create.c:477
#4 0x00007ffff7a8a133 in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb)
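(For context, here is a minimal sketch of the kind of guard that appears to fire here, reconstructed only from the printf shown at src/utils.c:196 and the arguments visible in the backtrace, namely bytes=0 and skip=65536. This is a hypothetical illustration, not dupd's actual code.)

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Hypothetical sketch, not dupd's implementation: read_entry_bytes() is
 * called with bytes=0, which trips a sanity check that prints the error
 * reported in this issue. The maintainer's guess is that the zero comes
 * from the next block size computation in the caller. */
static int check_read_request(const char *path, uint64_t bytes, uint64_t skip)
{
  if (bytes == 0) {
    printf("error: requested zero bytes from [%s] (skip=%" PRIu64 ")\n",
           path, skip);
    return -1;
  }
  return 0;
}

int main(void)
{
  /* Values taken from the backtrace above. */
  check_read_request(
      "/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive",
      0, 65536);
  return 0;
}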
This is after Processed 117207/161500 files, so the log files and output are
crazy big, but here are the last few lines of output (with -v -v -v):
Processed 117204/161500 (0 duplicates of size 27697584)
Processed 117205/161500 (0 duplicates of size 9976956)
Processed 117206/161500 (0 duplicates of size 12763568)
Processed 117207/161500 (0 duplicates of size 18555312)
and trace file:
1073930 A RB 65536 275828834
1073930 A RB 65536 275894370
1073931 A RB 65536 275959906
1073932 A RB 65536 276025442
Thanks, I see.
The development version (master branch) might be in an unstable state as it is work in progress, although I'm not aware of specific bugs in it, so this is helpful. You could still try the released version (1.7.1) to see if it hits the same issue or not.
Seems like the file size and block layout on disk are just such that they trigger a bug in the next block size computation.
A few debugging commands may give more useful info.
Disk layout for the file as dupd sees it:
dupd info --x-extents /mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive
There's also an option to print more debug output only for a given size, to avoid an overwhelming amount of output:
dupd scan --debug-size 4763056
(Those seem to be the size and path of the file causing the crash from the stack trace, but change accordingly if not.)
Same issue in branch 1.7_maintenance. Is that the same as the 1.7.1
release?
With the debug-size flag, I see the following. Perhaps something leaps out
at you from it, but it seems a bit insufficient to me.
...
Files scanned: 495000
add_file: SCAN_SIZE_UNKNOWN resolved to 4763056 for
[/mnt/4t/w530-brm/Desktop/admin/wirecopy/usr/lib/locale/locale-archive]
new_node: size tree node created for size 4763056 by file [locale-archive]
Files scanned: 500000
...
...
Files scanned: 1555000
add_file: SCAN_SIZE_UNKNOWN resolved to 4763056 for
[/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive]
----- dump path block list for size 4763056 -----
AFTER insert_first_path
head: 0x7fffd894a5c4
last_elem: 0x7fffd894a5e4
list_size: 1
wanted_bufsize: 0
buffer_ready: 0
state: PLS_NEED_DATA
hash_passes: 0
have_cached_hashes: 1
sizelist back ptr: (nil)
first_elem: 0x7fffd894a5e4
--entry 1
file state: FS_NEED_DATA
filename_size: 14
dir: 0x7ffff4c31ecf
fd: 0
next: (nil)
buffer: (nil)
bufsize: 0
data_in_buffer: 0
file_pos: 0
next_read_byte: 0
next_buffer_pos: 0
next_read_block: 0
blocks: (nil)
hash_ctx: (nil)
filename (direct read): [locale-archive]
built path:
[/mnt/4t/w530-brm/Desktop/admin/wirecopy/usr/lib/locale/locale-archive]
counted entries: 1
valid entries: 1
-----
----- dump path block list for size 4763056 -----
AFTER insert_end_path
head: 0x7fffd894a5c4
last_elem: 0x7fffd894a642
list_size: 2
wanted_bufsize: 65536
buffer_ready: 0
state: PLS_NEED_DATA
hash_passes: 0
have_cached_hashes: 0
sizelist back ptr: 0x7fffe1267590
forward ptr back to me: 0x7fffd894a5c4
first_elem: 0x7fffd894a5e4
--entry 1
file state: FS_NEED_DATA
filename_size: 14
dir: 0x7ffff4c31ecf
fd: 0
next: 0x7fffd894a642
buffer: (nil)
bufsize: 0
data_in_buffer: 0
file_pos: 0
next_read_byte: 0
next_buffer_pos: 0
next_read_block: 0
blocks: 0x7fffe125e110
hash_ctx: (nil)
BLOCK LIST: count=4
[0] start_pos: 0 , len: 12288 , block: 4563150848
[1] start_pos: 16384 , len: 8192 , block: 4563150880
[2] start_pos: 65536 , len: 4128768 , block: 4563150976
[3] start_pos: 4194304 , len: 568752 , block: 4563470592
filename (direct read): [locale-archive]
built path:
[/mnt/4t/w530-brm/Desktop/admin/wirecopy/usr/lib/locale/locale-archive]
--entry 2
file state: FS_NEED_DATA
filename_size: 14
dir: 0x7ffff47e6f31
fd: 0
next: (nil)
buffer: (nil)
bufsize: 0
data_in_buffer: 0
file_pos: 0
next_read_byte: 0
next_buffer_pos: 0
next_read_block: 0
blocks: 0x7ffff0fab660
hash_ctx: (nil)
BLOCK LIST: count=4
[0] start_pos: 0 , len: 12288 , block: 331655168
[1] start_pos: 16384 , len: 8192 , block: 331655200
[2] start_pos: 65536 , len: 4128768 , block: 331655296
[3] start_pos: 4194304 , len: 568752 , block: 331711344
filename (direct read): [locale-archive]
built path:
[/mnt/4t/w530-brm/Desktop/admin/oldcopy/usr/lib/locale/locale-archive]
counted entries: 2
valid entries: 2
-----
Files scanned: 1560000
...
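(A quick cross-check of entry 2's BLOCK LIST against the file size, treating start_pos/len as byte extents within the file, an inference from how the numbers line up with the 4763056-byte size: the extents cover [0,12288), [16384,24576), [65536,4194304) and [4194304,4763056), ending exactly at the file size, with holes at [12288,16384) and [24576,65536). The second hole ends exactly at offset 65536, which matches skip=65536 in the failing read_entry_bytes() call, consistent with the earlier guess about the next block size computation. A small standalone sketch of that arithmetic:)

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Extents copied from the BLOCK LIST dump for entry 2 above; only
 * start_pos and len are used, the physical block numbers don't matter
 * for this coverage check. */
struct extent { uint64_t start; uint64_t len; };

int main(void)
{
  const uint64_t filesize = 4763056;
  const struct extent ext[] = {
    {       0,   12288 },
    {   16384,    8192 },
    {   65536, 4128768 },
    { 4194304,  568752 },
  };
  uint64_t covered = 0;
  uint64_t pos = 0;

  for (unsigned i = 0; i < sizeof(ext) / sizeof(ext[0]); i++) {
    if (ext[i].start > pos) {
      printf("hole: [%" PRIu64 ", %" PRIu64 ") = %" PRIu64 " bytes\n",
             pos, ext[i].start, ext[i].start - pos);
    }
    covered += ext[i].len;
    pos = ext[i].start + ext[i].len;
  }
  printf("covered=%" PRIu64 " last_extent_end=%" PRIu64 " filesize=%" PRIu64 "\n",
         covered, pos, filesize);
  return 0;
}

(Expected output, if the field interpretation is right: two holes of 4096 and 40960 bytes, 4718000 bytes covered, and the last extent ending exactly at the 4763056-byte file size.)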
Please include output of the dupd info --x-extents command mentioned earlier;
it should be the same as in the size debug output, but good to confirm.
I'm running on a pretty big set of files (4M+ files, about 2TB), and I get this error partway through.
Progress line looks like:
Sets : 117207/ 161500 65138564K ( 64302K/s) 0q 1%b 3075f 1013 s
error: requested zero bytes from [omitted_filename] (skip=65536)
The mentioned file exists, has data, and is readable. I am already trying to reproduce in gdb for more info, but suggestions/ideas could be useful.
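(One possible way to catch the failing call under gdb, as a sketch only: the breakpoint location utils.c:196 is taken from the backtrace earlier in this thread and may shift between versions, and the placeholder in angle brackets stands for whatever command line was used in the failing run.)

gdb --args ./dupd scan <same scan arguments as the failing run>
(gdb) break utils.c:196
(gdb) run
(gdb) where

Running "where" once the breakpoint is hit prints the backtrace, as shown above in this thread.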