issuekey,created,title,description,storypoints
118020102,2012-03-15 23:23:10.725,Multiple Targets (lp:#956560),"[Original report](https://bugs.launchpad.net/bugs/956560) created by **Thomas (th-h)** ``` I would consider it very useful to improve duplicity in a way that it can handle multiple backup targets. E.g. if I have 500 GB of data to back up and I need to distribute the backup to 5 different WebDAV accounts, it would be very nice if I could tell duplicity the upper limit for each account, and then duplicity would automatically distribute the slices over several hosts. In my example I would use a 1 GB slice size and would then tell duplicity to write the first 100 slices to WebDAV account a and the second 100 slices to account b and so on... ```",18
118020086,2012-03-06 12:23:48.481,BackendException: sftp put of [] failed: [Errno 2] (lp:#947936),"[Original report](https://bugs.launchpad.net/bugs/947936) created by **Francis West (francis-badape)** ``` Hello, I have a strange bug with v0.6.18. I get the following error: Local and Remote metadata are synchronized, no sync needed. Warning, found incomplete backup sets, probably left from aborted session Last full backup left a partial set, restarting. Last full backup date: Tue Mar 6 11:42:05 2012 Reuse configured PASSPHRASE as SIGN_PASSPHRASE RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning. Reading filelist /root/bin/filelist.txt Sorting filelist /root/bin/filelist.txt Local and Remote metadata are synchronized, no sync needed. Warning, found incomplete backup sets, probably left from aborted session Last full backup date: none Reuse configured PASSPHRASE as SIGN_PASSPHRASE No signatures found, switching to full backup.
BackendException: sftp put of /var/tmp/duplicity-Y5cji0-tempdir/mktemp-0Cehis-2 (as duplicity- full.20120306T115112Z.vol1.difftar.gpg) failed: [Errno 2] yes indeed the file it is trying to access does not exist, it doesn't appear to affect all my systems i've tested duplicity 0.6.18 on, however the systems are lucid with the latest updates installed. * lots of comparing files * Comparing () and None Getting delta of (() /opt/zimbra/aspell-0.60.6/lib/aspell-0.60/fr-60-only.rws reg) and None A file.abc Removing still remembered temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-YUYbEk- tempdir/mktemp-WapA7f-1 Cleanup of temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-YUYbEk- tempdir/mktemp-WapA7f-1 failed Removing still remembered temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-aPa8gV- tempdir/mktemp-MeYX9j-1 Cleanup of temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-aPa8gV- tempdir/mktemp-MeYX9j-1 failed AsyncScheduler: running task synchronously (asynchronicity disabled) Removing still remembered temporary file /tmp/duplicity-Pa11IH- tempdir/mkstemp-tl22iM-1 Removing still remembered temporary file /tmp/duplicity-Pa11IH- tempdir/mktemp-R58OQe-2 Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1391, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1384, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1359, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 500, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 399, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", 
line 398, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num), File ""/usr/bin/duplicity"", line 296, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.6/dist-packages/duplicity/backends/sshbackend.py"", line 191, in put raise BackendException(""sftp put of %s (as %s) failed: %s"" % (source_path.name,remote_filename,e)) BackendException: sftp put of /tmp/duplicity-VOemDY-tempdir/mktemp-VGFNvm-2 (as duplicity-full.20120306T120940Z.vol1.difftar.gpg) failed: [Errno 2] BackendException: sftp put of /tmp/duplicity-VOemDY-tempdir/mktemp-VGFNvm-2 (as duplicity-full.20120306T120940Z.vol1.difftar.gpg) failed: [Errno 2] yes i agree you are going to have an error there because that file/dir doesn't exist the first few lines are: Using archive dir: /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15 Using backup name: 763e12cfbcae8ceca538e33fee48bf15 Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.u1backend Succeeded Reading filelist /root/bin/filelist.txt Sorting filelist /root/bin/filelist.txt Main action: inc ================================================================================ duplicity 0.6.18 ($reldate) Args: /usr/bin/duplicity -v 9 --encrypt-key A1C86AEB --sign-key A1C86AEB --exclude-filelist /root/bin/filelist.txt / 
scp://user_stuff_snip@remote_host/remotedir Linux zimbra 2.6.32-37-server #81-Ubuntu SMP Fri Dec 2 20:49:12 UTC 2011 x86_64 /usr/bin/python2.6 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] ================================================================================ ================================================================================ Using temporary directory /tmp/duplicity-Pa11IH-tempdir Registering (mkstemp) temporary file /tmp/duplicity-Pa11IH-tempdir/mkstemp- tl22iM-1 Temp has 84592353280 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: SftpBackend Archive dir: /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. Reuse configured PASSPHRASE as SIGN_PASSPHRASE No signatures found, switching to full backup. Using temporary directory /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-YUYbEk- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-YUYbEk- tempdir/mktemp-WapA7f-1 Using temporary directory /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-aPa8gV- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/763e12cfbcae8ceca538e33fee48bf15/duplicity-aPa8gV- tempdir/mktemp-MeYX9j-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-Pa11IH- tempdir/mktemp-R58OQe-2 Selecting / Comparing () and None Getting delta of (() / dir) and None A . Selecting /cdrom Comparing ('cdrom',) and None Getting delta of (('cdrom',) /cdrom dir) and None A cdrom Selecting /etc Comparing ('etc',) and None .... 
```",22 118020059,2012-03-05 10:27:06.500,Frequent BackendException:s lately (SFTP) (lp:#946992),"[Original report](https://bugs.launchpad.net/bugs/946992) created by **Daniel Andersson (drandersson)** ``` I have had a backup script using duplicity running for a long time. Recently (I have noticed it since mid-January, perhaps related to the introduction of python-paramiko) it frequently (not always) fails with (using switches ""--asynchronous-upload -v5"" (I haven't gotten a ""-v9"" report since it doesn't always happen)): """""" AsyncScheduler: task execution done (success: False) ....[some files added to next volume]... AsyncScheduler: scheduling task for asynchronous execution AsyncScheduler: a previously scheduled task has failed; propagating the result immediately Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1388, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1381, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1351, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 500, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 399, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 151, in schedule_task return self.__run_asynchronously(fn, params) File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 215, in __run_asynchronously with_lock(self.__cv, wait_for_and_register_launch) File ""/usr/lib/python2.7/dist-packages/duplicity/dup_threading.py"", line 100, in with_lock return fn() File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 196, in wait_for_and_register_launch check_pending_failure() # raise on fail File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 191, in check_pending_failure self.__failed_waiter() File ""/usr/lib/python2.7/dist-packages/duplicity/dup_threading.py"", line 201, in caller value = fn() File 
""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 183, in (waiter, caller) = async_split(lambda: fn(*params)) File ""/usr/bin/duplicity"", line 398, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num), File ""/usr/bin/duplicity"", line 296, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/sshbackend.py"", line 189, in put raise BackendException(""sftp put of %s (as %s) failed: %s"" % (source_path.name,remote_filename,e)) BackendException: sftp put of /media/5-2000/tmp/duplicity-tmp/duplicity- CyAVNR- tempdir/mktemp-d28oCp-409 (as duplicity- full.20120207T111003Z.vol408.difftar.gpg) failed: Server connection dropped: """""" and to STDERR it is written """""" No handlers could be found for logger ""paramiko.transport"" BackendException: sftp put of /media/5-2000/tmp/duplicity-tmp/duplicity- CyAVNR- tempdir/mktemp-d28oCp-409 (as duplicity- full.20120207T111003Z.vol408.difftar.gpg) failed: Server connection dropped: """""" It sounds like a local connection error, but it has never happened before and the error mentions paramiko explicitly. I have recently started using "" --asynchronous-upload"", but I believe it happened before that as well (it is hard to track down since it seems to happen randomly). It happens more often during full backups, when 20GB+ is transferred (default volume size 25MB), than during incremental backups when only a few volumes typically are transferred. Perhaps it is a local connection error that always have existed, but the previous SFTP backend retried as default action instead of failing. Also reported as Debian bug #659007, but it has gotten no attention for a month. 
Duplicity 0.6.17 Python 2.7.2+ Debian Sid Linux ```",30
118020051,2012-02-20 07:29:36.301,sftp backend should properly report write failure when destination full (lp:#936770),"[Original report](https://bugs.launchpad.net/bugs/936770) created by **Olivier Berger (oberger)** ``` When the remote destination is full, the sftp backend will try writing the (usually big) file, then detect a failure and report it, but in a way that is not really obvious (see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659008#10 for instance). I guess that before copying a big file to the destination, there may be a way to check for free space there and do the calculation for potential problems... Provided that no better diagnostic (like ""file system full"") can be obtained at failure time, maybe a warning about checking disk space could be issued. Of course, there may be other programs simultaneously changing the destination usage, so this would just be a hint for the user, if there's no way to get a proper ""file system full"" through paramiko and sftp ;-) Hope this helps. Thanks in advance. ```",6
118020046,2012-02-18 18:42:28.167,Option to skip existing files with restore action and --force (lp:#935644),"[Original report](https://bugs.launchpad.net/bugs/935644) created by **Christopher Foo (chris.foo)** ``` I decided to restore to another external hdd and left it overnight. I checked this morning and I got hit by bug LP#862485 and bug LP#749876. Attempting to restore again will give the usual  Restore destination directory [blahblah] already exists.  Will not overwrite. so I tried using the --force argument as well. Unfortunately, that means it has to restore everything again. So, it would be nice if there was an option that proceeds with restoring into an existing directory but skips existing files in the destination. 
Duplicity version: rev 831 from duplicity-0.6-series + hax patch from LP#662442 Python version: Python 2.7.2+ OS Distro and version: ubuntu 11.10 Type of target filesystem: ntfs Command: PYTHONPATH=. bin/duplicity restore file:///media/CHFOO2/backups/backups-laptop /media/HP\ SimpleSave/chris/restore/ --verbosity info --force (I tried looking at the source code to see if I could write up a quick hax, but it looks like there are too many deltrees() in the logic.) ```",6
118020043,2012-02-16 10:54:50.432,ssh backend does not work with folders containing 'open' (lp:#933388),"[Original report](https://bugs.launchpad.net/bugs/933388) created by **Peter Meier (peter-meier)** ``` I'm not able to use duplicity on folders containing the word open. It looks like the client is cutting off everything from open on: Does not work: sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' <- folder name is missing Works: sftp command: 'mkdir ""foo@example.com""' State = sftp, Before = 'mkdir ""foo@example.com"" <- folder name is here Does not work: sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' <- folder name is only partially present. It also does not work on a folder just called fooopenbar or simply open. Full debug runs attached. 1st does not work 2nd works 3rd does not work This started to appear after upgrading from 0.6.11 to 0.6.17. Other folders, not containing the word open, work fine. 
# ssh sndsite@10.x.x rm -rf /tmp/open@example.com # PASSPHRASE=""foobar"" duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/open@example.com -v9 Using archive dir: /root/.cache/duplicity/fc444e6ac9d9df55ca0cb73588b1a453 Using backup name: fc444e6ac9d9df55ca0cb73588b1a453 Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Failed: No module named py Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: cleanup ================================================================================ duplicity 0.6.17 (November 25, 2011) Args: /usr/bin/duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/open@example.com -v9 Linux foo.example.com 2.6.18-274.17.1.el5xen #1 SMP Tue Jan 10 18:06:37 EST 2012 x86_64 x86_64 /usr/bin/python 2.4.3 (#1, Sep 21 2011, 19:55:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] ================================================================================ Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""open@example.com""' sftp command: 'cd ""open@example.com""' State = sftp, Before = 'cd ""open@example.com""' sftp command: 'ls -1' State = sftp, Before = 'ls -1' State = sftp, Before = 'quit' finished sftp command: 'quit' Local and Remote metadata are synchronized, no sync needed. Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""open@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #1) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #2) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""open@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #2) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #3) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""open@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #3) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #4) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""open@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #4) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #5) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""open@example.com""' State = sftp, Before = 'mkdir ""' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""open@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #5) Giving up trying to execute 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' after 5 attempts Using temporary directory /tmp/duplicity-yyyhfe-tempdir Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1377, in ? with_tempdir(main) File ""/usr/bin/duplicity"", line 1370, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1250, in main globals.archive_dir).set_values() File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 673, in set_values backend_filename_list = self.backend.list() File ""/usr/lib64/python2.4/site- packages/duplicity/backends/sshbackend.py"", line 296, in list l = self.run_sftp_command(commandline, commands).split('\n')[1:] File ""/usr/lib64/python2.4/site- packages/duplicity/backends/sshbackend.py"", line 214, in run_sftp_command raise BackendException(""Error running '%s'"" % commandline) BackendException: Error running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' BackendException: Error running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' --------------------------------- # ssh sndsite@10.x.x.x rm -rf /tmp/foo@example.com # PASSPHRASE=""foo"" duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/foo@example.com -v9 Using archive dir: /root/.cache/duplicity/146f6826f86449fc656e5f1778130372 Using backup name: 146f6826f86449fc656e5f1778130372 Import of duplicity.backends.tahoebackend Succeeded 
Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Failed: No module named py Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: cleanup ================================================================================ duplicity 0.6.17 (November 25, 2011) Args: /usr/bin/duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/foo@example.com -v9 Linux foo.example.com 2.6.18-274.17.1.el5xen #1 SMP Tue Jan 10 18:06:37 EST 2012 x86_64 x86_64 /usr/bin/python 2.4.3 (#1, Sep 21 2011, 19:55:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] ================================================================================ Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foo@example.com""' State = sftp, Before = 'mkdir ""foo@example.com""' sftp command: 'cd ""foo@example.com""' State = sftp, Before = 'cd ""foo@example.com""' sftp command: 'ls -1' State = sftp, Before = 'ls -1' State = sftp, Before = 'quit' finished sftp command: 'quit' Local and Remote metadata are synchronized, no sync needed. 
Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foo@example.com""' State = sftp, Before = 'mkdir ""foo@example.com"" Couldn't create directory: Failure' sftp command: 'cd ""foo@example.com""' State = sftp, Before = 'cd ""foo@example.com""' sftp command: 'ls -1' State = sftp, Before = 'ls -1' State = sftp, Before = 'quit' finished sftp command: 'quit' 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: SSHBackend Archive dir: /root/.cache/duplicity/146f6826f86449fc656e5f1778130372 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No extraneous files found, nothing deleted in cleanup. 
Using temporary directory /tmp/duplicity-5qFONA-tempdir --------- # ssh sndsite@10.x.x.x rm -rf /tmp/foopenbar@example.com # PASSPHRASE=""foo"" duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/foopenbar@example.com -v9 Using archive dir: /root/.cache/duplicity/3ffaa9979263a074b0e8a8869a91a549 Using backup name: 3ffaa9979263a074b0e8a8869a91a549 Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Failed: No module named py Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: cleanup ================================================================================ duplicity 0.6.17 (November 25, 2011) Args: /usr/bin/duplicity cleanup --extra-clean --force ssh://sndsite@10.x.x.x//tmp/foopenbar@example.com -v9 Linux foo.example.com 2.6.18-274.17.1.el5xen #1 SMP Tue Jan 10 18:06:37 EST 2012 x86_64 x86_64 /usr/bin/python 2.4.3 (#1, Sep 21 2011, 19:55:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] ================================================================================ Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""foopenbar@example.com""' sftp command: 'cd ""foopenbar@example.com""' State = sftp, Before = 'cd ""foopenbar@example.com""' sftp command: 'ls -1' State = sftp, Before = 'ls -1' State = sftp, Before = 'quit' finished sftp command: 'quit' Local and Remote metadata are synchronized, no sync needed. Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""foopenbar@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #1) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #2) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""foopenbar@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #2) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #3) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""foopenbar@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #3) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #4) State = sftp, Before = 'Connecting to 10.x.x.x...' sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""foopenbar@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #4) Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' (attempt #5) State = sftp, Before = 'Connecting to 10.x.x.x...' 
sftp command: 'mkdir ""/tmp""' State = sftp, Before = 'mkdir ""/tmp"" Couldn't create directory: Failure' sftp command: 'cd ""/tmp""' State = sftp, Before = 'cd ""/tmp""' sftp command: 'mkdir ""foopenbar@example.com""' State = sftp, Before = 'mkdir ""fo' Could not open file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' finished sftp command: 'mkdir ""foopenbar@example.com""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' failed (attempt #5) Giving up trying to execute 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' after 5 attempts Using temporary directory /tmp/duplicity-EjrBlY-tempdir Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1377, in ? with_tempdir(main) File ""/usr/bin/duplicity"", line 1370, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1250, in main globals.archive_dir).set_values() File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 673, in set_values backend_filename_list = self.backend.list() File ""/usr/lib64/python2.4/site- packages/duplicity/backends/sshbackend.py"", line 296, in list l = self.run_sftp_command(commandline, commands).split('\n')[1:] File ""/usr/lib64/python2.4/site- packages/duplicity/backends/sshbackend.py"", line 214, in run_sftp_command raise BackendException(""Error running '%s'"" % commandline) BackendException: Error running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' BackendException: Error running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 sndsite@10.x.x.x' ```",6 118018839,2012-02-12 06:13:13.010,Inconsistent Verify: Invalid data - SHA1 hash mismatch : ErrorReturnCode=21 (lp:#930866),"[Original report](https://bugs.launchpad.net/bugs/930866) created by **Rodrigo Alvarez (rodrigo-alvarez-i)** ``` Short story: When I run duplicity in verify mode on a given set, it sometimes passes, and it sometimes fails with 
reporting: Invalid data - SHA1 hash mismatch for file: SOME_VOLUME.difftar.gpg Calculated hash: SOME_HASH Manifest hash: ANOTHER_HASH If I run it again, SOME_VOLUME gets through and ANOTHER_VOLUME reports the error. If I restart the server, then verify passes (sometimes). Yes, the hard drive where the volumes are stored is healthy (as reported by SMART) and when diff(ed) with its off-site mirror reports no differences. The details: I have several backup sets {Projects, Documents, Code, etc...} on Ubuntu 10.04 which I back up with duplicity 0.6.17 to an internal hard drive--the primary backup target. Once a month I run duplicity in verify mode to make sure that my primary backup is still a backup. This month two sets failed. This was not a problem as I have an offsite mirror that was synced 15 days ago. I ran `diff primary_backup mirror_backup` and the supposedly corrupt volumes were identical; as expected, the only differences were the backup volumes generated during the last 15 days. Weird, right? So I run duplicity verify on the primary_target and another volume--deeper in the set--reports a hash mismatch. Weirder, right? Now I run duplicity verify on the mirror: it passes. Somehow my primary backup got messed up, right? I clobber it and replace it with its older mirror and run duplicity verify to make sure all is good. It fails, the primary_backup fails. I run another `diff primary_backup mirror_backup` and now they are identical. Ok, the drive that stores the primary_backup is failing, but wait: the SMART long test says it is perfectly healthy and has no bad blocks or read/write errors in its entire history (I run these once a week). I delete the duplicity cache and temp files and rerun duplicity verify on primary_target and it fails: a new volume with a bad hash. Now I go brute force and run duplicity verify 10 times, and each time a different volume shows a bad hash. Sometimes volume 3, sometimes volume 1000. What? 
I now reboot the server--a few days have gone by and I've spent about 5 hrs looking at this--and I run duplicity verify on the primary_backup and it passes. What? This is the exact data that had failed 10 times? Seriously? Never mind, this is what I expected and how things should be. Just to make sure all is good I rerun duplicity verify on all my sets and wait. What? Now two new sets report errors but the ones that originally reported errors are fine. Help! Can we trust verify? Why does it fail sometimes and pass other times? Can it be the hard drive even if diff and SMART pass? Is there a bug in duplicity or gpg? ```",22 118022589,2012-01-26 14:25:56.272,tmp directory was removed before duplicity was finished with it (lp:#922101),"[Original report](https://bugs.launchpad.net/bugs/922101) created by **sirhc (sirhc808)** ``` On Thu, Jan 26, 2012 at 8:04 AM, sirhc808@gmail.com wrote: Hello - I'm new to Duplicity and I'm likely making a user error, however I've been unable to find a path around this issue. I've built Duplicity 0.6.17 and continue to receive the error message below when I attempt my first full backup using the local file backend. Perhaps the real issue is that my backup config is invalid and is not generating a tmp file, but is exposing a minor bug. At any rate, the same backup with ""--dry-run"" does not produce the same error. Details below. Any suggestions? Thanks!
-Chris debussy> duplicity --verbosity debug /volume1/sirhc file:///volume2/backup/test Using archive dir: /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc Using backup name: 353fa2a1d4f631ce7e04f457ad0ca9bc Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Failed: No module named py Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: inc ================================================================================ duplicity 0.6.17 (November 25, 2011) Args: /opt/bin/duplicity --verbosity debug /volume1/sirhc file:///volume2/backup/test Linux debussy 2.6.32.12 #1955 SMP Sat Nov 26 14:52:27 CST 2011 x86_64 /opt/bin/python 2.6.7 (r267:88850, Jun 8 2011, 23:10:54) [GCC 4.2.1] ================================================================================ Using temporary directory /tmp/duplicity-Q73Bx6-tempdir Registering (mkstemp) temporary file /tmp/duplicity-Q73Bx6-tempdir/mkstemp-qrYQ7V-1 Temp has 520413184 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc Found 0 secondary backup chains.
No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. GnuPG passphrase: PASSPHRASE variable not set, asking user. Retype passphrase to confirm: No signatures found, switching to full backup. Using temporary directory /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc/duplicity-sA6J8t-tempdir Registering (mktemp) temporary file /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc/duplicity-sA6J8t-tempdir/mktemp-mj6VZN-1 Using temporary directory /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc/duplicity-2o67Sb-tempdir Registering (mktemp) temporary file /root/.cache/duplicity/353fa2a1d4f631ce7e04f457ad0ca9bc/duplicity-2o67Sb-tempdir/mktemp-QkpxIF-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-Q73Bx6-tempdir/mktemp-bWLIa5-2 Selecting /volume1/sirhc GPG process 16825 terminated before wait() Comparing () and None Getting delta of (() /volume1/sirhc dir) and None A . Removing still remembered temporary file /tmp/duplicity-Q73Bx6-tempdir/mktemp-bWLIa5-2 Cleanup of temporary file /tmp/duplicity-Q73Bx6-tempdir/mktemp-bWLIa5-2 failed Removing still remembered temporary file /tmp/duplicity-Q73Bx6-tempdir/mkstemp-qrYQ7V-1 Cleanup of temporary file /tmp/duplicity-Q73Bx6-tempdir/mkstemp-qrYQ7V-1 failed Cleanup of temporary directory /tmp/duplicity-Q73Bx6-tempdir failed - this is probably a bug.
Traceback (most recent call last): File ""/opt/bin/duplicity"", line 1377, in with_tempdir(main) File ""/opt/bin/duplicity"", line 1370, in with_tempdir fn() File ""/opt/bin/duplicity"", line 1345, in main full_backup(col_stats) File ""/opt/bin/duplicity"", line 500, in full_backup globals.backend) File ""/opt/bin/duplicity"", line 378, in write_multivol globals.gpg_profile, globals.volsize) File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 316, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 308, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-Q73Bx6-tempdir/mktemp-bWLIa5-2' This is a bug! The directory was removed before duplicity was finished with it. Please report this to https://bugs.launchpad.net/duplicity. ...Thanks, ...Ken ```",44 118020031,2012-01-11 14:28:01.122,Backup restart/resume marks files deleted (lp:#914777),"[Original report](https://bugs.launchpad.net/bugs/914777) created by **krbvroc1 (kbass)** ``` Duplicity 0.6.17 Python 2.4.3 Centos 5.6 Target is custom backend Summary: When a RESTART occurs, all files prior to the restart will be marked deleted and re-backed up in subsequent backups To reproduce: 1) Kick off a full backup. Wait until several volumes have been uploaded and then hit Ctrl-C during the upload of a volume. You will probably have to hit Ctrl-C five times to abort each of the five retry attempts. The goal is to exit back to the shell. 2) Run the same backup command and let it complete. Duplicity will print something like ""RESTART: Volumes 2 to 3 failed to upload before termination. Restarting backup at volume 2."" At this point you are supposed to have a completed full backup set. 3) Run the same backup command and let it complete. This will be your first incremental. All files located in volumes prior to the RESTART will be marked as deleted. 
4) Run the same backup command and let it complete. This will be your second incremental. All files marked deleted will be backed up again and marked deleted again. ```",26 118020000,2012-01-10 22:19:22.851,Successful but incomplete full backup (lp:#914504),"[Original report](https://bugs.launchpad.net/bugs/914504) created by **Alphazo (alphazo)** ``` I started an initial full backup on 408GB of photos and videos using the configuration and duply version found below. Source and destination are found on the same local system so encryption was disabled. The only significant change I applied in the configuration was the volume size of 3.5GB rather than the default 25MB. Initial backup went fine (but took a very long time) with no error reported. I then started a second backup of the same data set, which has not been modified at all. The incremental backup took longer than expected and 86GB could be found in the incremental data set. Listing (duply list) and comparing the files in the initial full backup and incremental one revealed that the full backup stopped in the middle of a directory toward the end. The incremental backup just finished what should have been done by the initial full backup. For information, I used the exact same configuration on a different data set that is not photos & videos (only 240GB) and the initial full backup went fine and was complete. Incremental backup was very fast with no data added. Has anyone experienced such an incomplete backup? Could that be linked to the 3.5GB volume size?
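# The listing comparison described above ("duply list" on each chain) can
# be scripted: save the two listings to text files and diff them.
# Everything here (function name, one-path-per-line format) is an
# illustrative assumption, not a duplicity or duply API.
def missing_from_full(full_listing, incr_listing):
    # Paths present in the incremental listing but absent from the full
    # one are exactly what the initial full backup silently skipped.
    with open(full_listing) as f:
        full = set(line.strip() for line in f if line.strip())
    with open(incr_listing) as f:
        incr = set(line.strip() for line in f if line.strip())
    return sorted(incr - full)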
GPG_KEY='disabled' GPG_PW='_GPG_PASSWORD_' TARGET='file:///mnt/user/BACKUPS/snapshots/photos-backup' SOURCE='/mnt/user/PHOTOS' MAX_AGE=6M VOLSIZE=3500 DUPL_PARAMS=""$DUPL_PARAMS --volsize $VOLSIZE "" TEMP_DIR=/mnt/user/duplicity-cache_tmp/tmp ARCH_DIR=/mnt/user/duplicity-cache_tmp/.cache duply 1.5.5.4 duplicity 0.6.17 python 2.6.4 gpg 1.4.10 awk 'GNU Awk 3.1.8' bash '4.1.7(2) ```",18 118019996,2012-01-09 07:06:18.470,Duplicity 0.6.17 Hardy PPA package cannot be installed (lp:#913650),"[Original report](https://bugs.launchpad.net/bugs/913650) created by **goofrider (goofrider)** ``` When installing Duplicity 0.6.17 Hardy package from PPA, I encountered the following error: =============== The following packages have unmet dependencies:   duplicity: Depends: python-pexpect (>= 2.3-1) but 2.1-1build1 is to be installed E: Broken packages =============== Manually upgrading python-pexpect to 2.3-1 caused the following error: =============== Setting up python2.4 (2.4.5-1ubuntu4) ... Setting up duplicity (0.6.17-0ubuntu0ppa9~hardy1) ... Compiling /usr/lib/python2.4/site- packages/duplicity/backends/_boto_multi.py ...   File ""/usr/lib/python2.4/site- packages/duplicity/backends/_boto_multi.py"", line 405     with FileChunkIO(filename, 'r', offset=offset * bytes, bytes=bytes) as fd:                    ^ SyntaxError: invalid syntax Compiling /usr/lib/python2.4/site-packages/duplicity/filechunkio.py ...   File ""/usr/lib/python2.4/site-packages/duplicity/filechunkio.py"", line 80     except TypeError as err:                       ^ SyntaxError: invalid syntax pycentral: pycentral pkginstall: error byte-compiling files (48) pycentral pkginstall: error byte-compiling files (48) dpkg: error processing duplicity (--configure):  subprocess post-installation script returned error exit status 1 Processing triggers for libc6 ... 
ldconfig deferred processing now taking place Errors were encountered while processing:  duplicity E: Sub-process /usr/bin/dpkg returned an error code (1) =============== I could try upgrading to a newer version of Python (>=2.6), but it warns me about needing to upgrade libc6, so I decided not to risk breaking the entire system and stick with Python2.4. In the end I had to revert to an older version of Duplicity from Debian Lenny ``` Original tags: hardy",10 118019990,2012-01-08 10:49:34.562,Prompts for passphrase for key which doesn't require one (lp:#913375),"[Original report](https://bugs.launchpad.net/bugs/913375) created by **Adam Porter (alphapapa)** ``` I've been using Duplicity for several years off and on. I just tried 0.6.15 and 0.6.17. I use a key with an empty passphrase to sign my backups, and I use symmetric encryption with an exported passphrase. Both 0.6.15 and 0.6.17 prompt for a passphrase for my signing key, but when I hit enter without typing anything, it works. Previous versions didn't prompt for the passphrase. Using Kubuntu Oneiric. ```",6 118019980,2011-12-30 18:57:40.655,Crash while doing full backup (lp:#910186),"[Original report](https://bugs.launchpad.net/bugs/910186) created by **Milan Bouchet-Valat (nalimilan)** ``` I just got this crash while doing a full backup. Happened about 30 minutes after I started it... This looks very similar to bug 229457, but it was fixed years ago. I'm using duplicity 0.6.14 on Fedora 15. $ duplicity full /home/milan file:///media/VERBATIM/Sauvegardes/milan.new Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none GnuPG passphrase: Retype passphrase to confirm: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1311, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1304, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1274, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 447, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 325, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib64/python2.7/site-packages/duplicity/gpg.py"", line 307, in GPGWriteFile top_off(target_size - cursize, file) File ""/usr/lib64/python2.7/site-packages/duplicity/gpg.py"", line 280, in top_off assert misc.copyfileobj(incompressible_fp, file.gpg_input, bytes) == bytes File ""/usr/lib64/python2.7/site-packages/duplicity/misc.py"", line 180, in copyfileobj outfp.write(buf) IOError: [Errno 32] Broken pipe ```",6 118019977,2011-12-27 12:05:43.358,backup process to Amazon S3 crashes after aborted previous attempt (lp:#909019),"[Original report](https://bugs.launchpad.net/bugs/909019) created by **Andrew Burdyug (buran83)** ``` The backup process to Amazon S3 crashes after an aborted previous attempt: duplicity reports a problem with my secret gpg key, but there is no problem with my gpg keys, and if I delete ~/.cache/duplicity/1236317e04b0824d1d84cc5d3605afa5/*part, then a new full backup starts as usual. This seems to be a bug with gpg-signed and partially aborted backups.
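# The workaround described above, clearing the leftover *part files so a
# fresh full backup can start, written out as a small helper. The
# function name is illustrative; the commented path is the cache
# directory quoted in this report.
import glob, os

def clear_partial(cache_dir):
    # Remove duplicity's leftover partial files (e.g. *.manifest.part).
    removed = []
    for part in glob.glob(os.path.join(cache_dir, "*part")):
        os.remove(part)
        removed.append(part)
    return removed

# clear_partial(os.path.expanduser(
#     "~/.cache/duplicity/1236317e04b0824d1d84cc5d3605afa5"))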
duplicity 0.6.17 ```",6 118019968,2011-12-24 16:42:16.210,s3 upload speed: periodically drops to zero (lp:#908429),"[Original report](https://bugs.launchpad.net/bugs/908429) created by **Andrei Pozolotin (andrei-pozolotin)** ``` the following invocation duplicity \ --volsize $VOLUME_SIZE \ --s3-use-new-style \ --verbosity info \ --asynchronous-upload \ --include-globbing-filelist $INCLUDE \ --exclude-globbing-filelist $EXCLUDE \ $SOURCE $TARGET produces attached ""upload speed drops"" despite ""--asynchronous-upload"" ################################### user1@wks002:~/.duplicity$ duplicity --version duplicity 0.6.17 user1@wks002:~/.duplicity$ uname -a Linux wks002 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux ```",6 118019967,2011-12-24 16:25:03.867,s3 upload speed: duplicity vs s3cmd (lp:#908424),"[Original report](https://bugs.launchpad.net/bugs/908424) created by **Andrei Pozolotin (andrei-pozolotin)** ``` 1) I use comcast cable internet to access s3; 2) my reference upload speed as measured by http://www.speakeasy.net/speedtest/ Download Speed: 25486 kbps (3185.8 KB/sec transfer rate) Upload Speed: 4211 kbps (526.4 KB/sec transfer rate) 3) using s3cmd http://s3tools.org/s3cmd I am able to get sustained s3 upload speed close to 500 KB/sec 4) using duplicity duplicity \ --volsize $VOLUME_SIZE \ --s3-use-new-style \ --verbosity info \ --asynchronous-upload \ --include-globbing-filelist $INCLUDE \ --exclude-globbing-filelist $EXCLUDE \ $SOURCE $TARGET I get only about 250 KB/sec 5) I tried various options, volume sizes, etc still, I get only about 250 KB/sec no matter what; any suggestions, please? thank you. 
################################### user1@wks002:~/.duplicity$ duplicity --version duplicity 0.6.17 user1@wks002:~/.duplicity$ uname -a Linux wks002 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux ```",6 118019963,2011-12-24 15:56:15.736,crash with --s3-use-multiprocessing (lp:#908417),"[Original report](https://bugs.launchpad.net/bugs/908417) created by **Andrei Pozolotin (andrei-pozolotin)** ``` the following invocation duplicity \ --volsize $VOLUME_SIZE \ --s3-use-new-style \ --verbosity info \ --asynchronous-upload \ --s3-use-multiprocessing \ --include-globbing-filelist $INCLUDE \ --exclude-globbing-filelist $EXCLUDE \ $SOURCE $TARGET works fine if I remove --s3-use-multiprocessing but it blows up with exception otherwise: user1@wks002:~/.duplicity$ ./save.sh Using archive dir: /home/user1/.cache/duplicity/8f7a2bc069fd6f70a8dccbf1ce8eb68a Using backup name: 8f7a2bc069fd6f70a8dccbf1ce8eb68a Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.botobackend Failed: name 'sys' is not defined Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.giobackend Succeeded Using temporary directory /tmp/duplicity-IKgSDI-tempdir User error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1377, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1370, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1221, in main action = 
commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.6/dist-packages/duplicity/commandline.py"", line 996, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/lib/python2.6/dist-packages/duplicity/commandline.py"", line 889, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/lib/python2.6/dist-packages/duplicity/backend.py"", line 154, in get_backend raise UnsupportedBackendScheme(url_string) UnsupportedBackendScheme: scheme not supported in url: s3+http://archive-duplicity/wks002 UnsupportedBackendScheme: scheme not supported in url: s3+http://archive-duplicity/wks002 ######################## user1@wks002:~/.duplicity$ duplicity --version duplicity 0.6.17 user1@wks002:~/.duplicity$ uname -a Linux wks002 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux ```",16 118019961,2011-12-22 07:32:57.210,Signature Files Double in size upon restart (lp:#907669),"[Original report](https://bugs.launchpad.net/bugs/907669) created by **jpsb (jpsb)** ``` If a backup fails or is aborted, the signature files (local) double in size before resuming uploads. This is pretty tragic when trying to do a large (140GB) backup to cloudfiles, as the signature file can easily go from 5GB to 11GB uncompressed and then be impossible to upload. ```",6 118019957,2011-12-20 17:43:37.118,Misleading 'last full backup' when restoring (lp:#906995),"[Original report](https://bugs.launchpad.net/bugs/906995) created by **CiaranG (ciarang)** ``` When restoring (using version 0.6.13), I am being told this: Last full backup date: Tue Dec 13 10:46:14 2011 However, that's not the case. That's the last full backup in the primary chain, but there are secondary ones that are older. Fortunately, regardless of that output, it looks like Duplicity can see the older backups, and successfully restores from that when asked to (via --restore-time).
(However, there's no way of knowing it's worked, other than recognising that the restored output is as old as expected - duplicity says nothing else at all after the above output). Full collection status corresponding to the above: Found 1 secondary backup chain. Secondary chain 1 of 1: ------------------------- Chain start time: Sun Nov 27 12:54:20 2011 Chain end time: Sun Nov 27 12:54:20 2011 Number of contained backup sets: 1 Total number of contained volumes: 25 Type of backup set: Time: Num volumes: Full Sun Nov 27 12:54:20 2011 25 ------------------------- Found primary backup chain with matching signature chain: ------------------------- Chain start time: Tue Dec 13 10:46:14 2011 Chain end time: Tue Dec 20 03:41:31 2011 Number of contained backup sets: 8 Total number of contained volumes: 28 Type of backup set: Time: Num volumes: Full Tue Dec 13 10:46:14 2011 21 Incremental Wed Dec 14 03:42:50 2011 1 Incremental Thu Dec 15 03:42:22 2011 1 Incremental Fri Dec 16 03:41:50 2011 1 Incremental Sat Dec 17 03:42:28 2011 1 Incremental Sun Dec 18 03:41:56 2011 1 Incremental Mon Dec 19 03:41:53 2011 1 Incremental Tue Dec 20 03:41:31 2011 1 ------------------------- No orphaned or incomplete backup sets found. ```",6 118019949,2011-12-06 04:23:28.574,duplicity crashing with an Assertion error when trying to restore (lp:#900600),"[Original report](https://bugs.launchpad.net/bugs/900600) created by **Chris Stankaitis (cstankaitis)** ``` I am trying to restore a backup... duplicity keeps on crashing out with an Assertion Error please help OS: RHEL6 Duplicity ver: duplicity-0.6.14-1.el6.x86_64 Python: python-2.6.5-3.el6.x86_64 Here is the command I am using. 
[root@backup1.tor.fmpub.net restoretemp]# duplicity --no-encryption -t ""2011/11/28"" --archive-dir ""/backup/restoretemp"" --tempdir ""/var/tmp"" -v6 file:///backup/backup2.tor.fmpub.net/backup/xtrabackup-db-backup/bt-3306/ /backup/restoretemp/bt-restore Using archive dir: /backup/restoretemp/b6dfbc00e137c1d3f4db133984ed9231 Using backup name: b6dfbc00e137c1d3f4db133984ed9231 Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Main action: restore ================================================================================ duplicity 0.6.14 (June 18, 2011) Args: /usr/bin/duplicity --no-encryption -t 2011/11/28 --archive-dir /backup/restoretemp --tempdir /var/tmp -v6 file:///backup/backup2.tor.fmpub.net/backup/xtrabackup-db-backup/bt-3306/ /backup/restoretemp/bt-restore Linux backup1.tor.fmpub.net 2.6.32-71.7.1.el6.x86_64 #1 SMP Wed Oct 27 03:44:59 EDT 2010 x86_64 x86_64 /usr/bin/python 2.6.5 (r265:79063, Jul 14 2010, 11:36:05) [GCC 4.4.4 20100630 (Red Hat 4.4.4-10)] ================================================================================ Using temporary directory /var/tmp/duplicity-7maMxv-tempdir Temp has 108288081920 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1311, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1304, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1184, in main globals.archive_dir).set_values() File ""/usr/lib64/python2.6/site-packages/duplicity/collections.py"", line 692, in set_values backup_chains) File ""/usr/lib64/python2.6/site-packages/duplicity/collections.py"", line 705, in set_matched_chain_pair sig_chains = sig_chains and self.get_sorted_chains(sig_chains) File ""/usr/lib64/python2.6/site-packages/duplicity/collections.py"", line 918, in get_sorted_chains assert len(chain_list) == 2 AssertionError Thank You ```",6 118019947,2011-11-28 19:24:54.610,Wrong German Translation (lp:#897355),"[Original report](https://bugs.launchpad.net/bugs/897355) created by **Allo (allo)** ``` In the filename log (when verbosity is high enough), there is ""Ein /some/path"", which seems to be wrong. I think the original was ""A /some/path"", and ""Ein(e)"" is the german unspecified article like the english a/an. ``` Original tags: german translation",12 118019942,2011-11-28 10:17:46.503,man page file permissions: 600 (lp:#897147),"[Original report](https://bugs.launchpad.net/bugs/897147) created by **SanskritFritz (sanskritfritz+launchpad)** ``` Duplicity Version : 0.6.17 The man file permissions have been changed: ls -l /usr/share/man/man1/dupl* -rw------- 1 root root 13K 2011-11-25 20:20 /usr/share/man/man1/duplicity.1.gz The result is that non root users are unable to read the manpage for duplicity. 
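# The immediate workaround for the 600 permissions described above is to
# make the installed page world-readable again (0644). A helper like
# this (name illustrative), run as root, restores access:
import os

def make_world_readable(path):
    # 0644: owner read/write, group and others read-only.
    os.chmod(path, 0o644)

# make_world_readable("/usr/share/man/man1/duplicity.1.gz")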
The setup in archlinux is as follows: python2 setup.py install --root=""$pkgdir"" --optimize=1 ``` Original tags: man",12 118019927,2011-11-09 21:28:07.832,WebDAV backend handles 204 status code as a failure (lp:#888301),"[Original report](https://bugs.launchpad.net/bugs/888301) created by **Hugh Eaves (hugh-hugheaves)** ``` Duplicity Version: 6.15, 6.16, 7.0 Ubuntu 10.04 LTS, Apache 2.2 with mod_dav Python 2.7.2+ (default, Oct 4 2011, 20:06:09) When a backup is interrupted and then resumed later, duplicity will (correctly) attempt to re-create and overwrite the last volume created by the interrupted backup. However, when overwriting an existing file using a WebDAV PUT, Apache 2.2 returns a '204 - No Content' status instead of '201 - Created'. Even though the file uploaded correctly, the current webdavbackend.py considers anything but a ""201"" to be a failure. http://bazaar.launchpad.net/~duplicity-team/duplicity/0.7-series/view/head:/duplicity/backends/webdavbackend.py#L252 ```",6 118019923,2011-11-07 22:31:05.740,UbuntuOne not listed in command line help (lp:#887355),"[Original report](https://bugs.launchpad.net/bugs/887355) created by **P Fudd (g-ubuntu-com)** ``` Typing 'duplicity --help' does not list 'u1://host/volume_path' 'u1+http://volume_path' in version 0.6.16. Duplicity version: 0.6.16 Python version: 2.7 OS Distro and version: Fedora 14 Target: Ubuntu One Log output: ha ```",6 118019920,2011-11-06 17:47:24.152,Wishlist: Need for more details on which incremental backup is restored (lp:#886871),"[Original report](https://bugs.launchpad.net/bugs/886871) created by **Olivier Berger (olivierberger)** ``` It would be great to have duplicity restore command messages add a more descriptive message mentioning which increment was restored and not only the reference to its full backup. As it is now, it looks like only the old full backup was restored.
See http://lists.nongnu.org/archive/html/duplicity-talk/2011-11/msg00030.html for more details ```",6 118019917,2011-11-03 01:09:07.339,S3 multichunk support uploads too much data (lp:#885513),"[Original report](https://bugs.launchpad.net/bugs/885513) created by **Michael Terry (mterry)** ``` If the volume being uploaded is not perfectly divisible by the S3 multichunk size, duplicity will end up uploading too much data. This appears to be because of the following code in botobackend.py:

chunks = bytes / chunk_size
if (bytes % chunk_size):
    chunks += 1
...
for n in range(chunks):
    params = {
        ...
        'bytes': chunk_size,

```",8 118019913,2011-10-31 17:35:34.771,cfbackend should use the retry decorator (lp:#884345),"[Original report](https://bugs.launchpad.net/bugs/884345) created by **Scott Severance (scott.severance)** ``` I'm backing up to Rackspace, and I have quite a large backup set. Often when backing up, some sort of network exception gets raised. Sometimes, it's a connection timeout. Other times it's an SSL error. There are probably others, too, that I don't remember at the moment. Sometimes the error will occur early in the backup process. Other times it'll be near the end. Of course, all these error conditions are transient, yet DD just reports a Backend exception and gives up. Instead, it should retry a number of times, especially if it has already connected in the current session. Networks are unreliable, and DD doesn't seem to realize this. I'm currently using Ubuntu 11.10 with deja-dup 20.1-0ubuntu0.1 and duplicity 0.6.15-0ubuntu2. However, this problem has been going on for a long time and a number of versions.
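# A sketch of the kind of retry wrapper being requested here: duplicity's
# other backends retry a failed network call several times before giving
# up. The names and defaults below (retry, attempts, delay) are
# illustrative assumptions, not duplicity's actual decorator.
import time

def retry(attempts=5, delay=10):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for n in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if n == attempts:
                        raise  # out of attempts, surface the real error
                    time.sleep(delay)
        return wrapper
    return decorator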
```",22 118022839,2011-10-22 15:19:11.483,Backup fails after initially scanning older backups (lp:#879957),"[Original report](https://bugs.launchpad.net/bugs/879957) created by **Neil Robinson (halfhaggis+)** ``` This problem has only surfaced since upgrading to 11.10. Expected result: Backup backs up files to external hard drive as it has previously done flawlessly. Action taken: Run deja-dup. Select make new backup. Scanning starts, but stops and deja-dup reports ""Backup Failed."" Gives this error: Failed to read /tmp/duplicity-Xxfwll-tempdir/mktemp-8Ylyy0-1: (, IOError('CRC check failed 0x96cdfab != 0x8652a9f7L',), ) -- Incidentally, restoring backups from the previous backups results in a similar error. I haven't tried restoring anything using duplicity directly. ProblemType: Bug DistroRelease: Ubuntu 11.10 Package: deja-dup 20.1-0ubuntu0.1 ProcVersionSignature: Ubuntu 3.0.0-12.20-generic 3.0.4 Uname: Linux 3.0.0-12-generic i686 ApportVersion: 1.23-0ubuntu3 Architecture: i386 Date: Sat Oct 22 15:08:13 2011 InstallationMedia: Ubuntu 10.04 LTS ""Lucid Lynx"" - Release i386 (20100429) SourcePackage: deja-dup UpgradeStatus: Upgraded to oneiric on 2011-10-16 (5 days ago) ``` Original tags: apport-bug i386 oneiric running-unity",6 118019855,2011-10-16 12:06:33.350,Backing up fails with 'IOError CRC check failed'. (lp:#875676),"[Original report](https://bugs.launchpad.net/bugs/875676) created by **Michael Terry (mterry)** ``` For 4 days déjà dup hasn't been able to perform a backup. It fails with the error Failed to read /tmp/duplicity-lJcUDl-tempdir/mktemp-o4LYSJ-1: (, IOError('CRC check failed 0x8434f7d2L != 0x3d503338L',), ) There is another similar bug #676767 where deleting ~/.cache/deja-dup helps. In this case it doesn't. I'm quite certain that my backup drive isn't corrupted. (It's a raid5.) I'd be happy to provide any additional information needed.
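# The failure above is gzip's trailing CRC check firing while duplicity
# reads a cached temp file. Whether a given gzip file is intact can be
# checked in isolation with a sketch like this (helper name
# illustrative):
import gzip, zlib

def gzip_ok(path):
    try:
        with gzip.open(path, "rb") as f:
            # Read to EOF so the trailing CRC/length fields get verified.
            while f.read(1 << 20):
                pass
        return True
    except (OSError, EOFError, zlib.error):
        return False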
---------- System information: Ubuntu 11.10 deja-dup 20.0-0ubuntu3 duplicity 0.6.15-0ubuntu2 Logs: deja-dup.log: http://pastie.org/2705320 deja-dup.gsettings: http://pastie.org/2705322 ``` Original tags: xenial",136 118022306,2011-10-13 06:09:41.901,duplicity creates huge backup archives (lp:#873164),"[Original report](https://bugs.launchpad.net/bugs/873164) created by **Anders Aagaard (aagaande)** ``` Running this command: # duplicity --no-encryption --volsize 25 --full-if-older-than 1M --s3-european-buckets --s3-use-new-style /path/ s3+http://path # ls -lh ./.cache/duplicity/cb16c6c2ab88f146c07ca64f59318183/ total 73G -rw------- 1 root root 2.1M 2011-10-11 21:52 duplicity-full.20110919T174252Z.manifest.part -rw------- 1 root root 31G 2011-10-11 22:46 duplicity-full-signatures.20110919T174252Z.sigtar.gz -rw------- 1 root root 43G 2011-10-11 21:52 duplicity-full-signatures.20110919T174252Z.sigtar.part Most of my duplicity backups work fine, but in this directory it's not splitting the archives properly. duplicity 0.6.14 ubuntu 10.10, python 2.6.6 ```",12 118019853,2011-10-10 20:03:58.562,issue with IIS 6.5 or self-signed certificates (lp:#871982),"[Original report](https://bugs.launchpad.net/bugs/871982) created by **Langdon White (lwhite)** ``` I am running IIS 6.5 on Windows Server 2003 (don't ask why, long story) with a self-signed certificate. I cannot get duplicity to connect to some webdavs directories to do backups. I believe it is the IIS thing or the self-signed thing. I am not sure which. I can say though that cadaver connects fine (after yelling at me about the self-signed cert). Would love ideas or a fix (if it's a bug and not a problem between keyboard and back of chair). ""Using WebDAV protocol http"" also seems a little weird.
uname -a: Linux thor 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:17:25 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux Log from verbosity 9 (with some names changed: ""user"" and ""backup-host"") Using archive dir: /home/user/.cache/duplicity/36616c01a8fe379d2d6a7b6162a3c680 Using backup name: 36616c01a8fe379d2d6a7b6162a3c680 Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Succeeded Using WebDAV host backup-host Using WebDAV directory /dav/Backups/thor/ Using WebDAV protocol http Main action: inc ================================================================================ duplicity 0.6.13 (April 02, 2011) Args: /usr/bin/duplicity --verbosity 9 --no-encryption /home/user/Documents webdavs://user@backup-host/dav/Backups/thor/ Linux thor 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:17:25 UTC 2011 x86_64 x86_64 /usr/bin/python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53) [GCC 4.5.2] ================================================================================ Using temporary directory /tmp/duplicity-5ewB0q-tempdir Registering (mkstemp) temporary file /tmp/duplicity-5ewB0q-tempdir/mkstemp-njwpv6-1 Temp has 17844174848 available, backup will use approx 34078720.
Listing directory /dav/Backups/thor/ on WebDAV server WebDAV PROPFIND attempt #1 failed: 200 Listing directory /dav/Backups/thor/ on WebDAV server WebDAV PROPFIND attempt #2 failed: 400 Bad Request Listing directory /dav/Backups/thor/ on WebDAV server Removing still remembered temporary file /tmp/duplicity-5ewB0q-tempdir/mkstemp-njwpv6-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1265, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1258, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1159, in main sync_archive() File ""/usr/bin/duplicity"", line 931, in sync_archive remlist = globals.backend.list() File ""/usr/lib/python2.7/dist- packages/duplicity/backends/webdavbackend.py"", line 165, in list response = self.request(""PROPFIND"", self.directory, self.listbody) File ""/usr/lib/python2.7/dist- packages/duplicity/backends/webdavbackend.py"", line 107, in request response = self.conn.getresponse() File ""/usr/lib/python2.7/httplib.py"", line 1015, in getresponse raise ResponseNotReady() ResponseNotReady ```",6 118019852,2011-09-25 17:25:48.946,IMAPS target doesn't work on Centos 5.6 (lp:#859054),"[Original report](https://bugs.launchpad.net/bugs/859054) created by **Alexander Akimov (aleksander-akimow)** ``` Using installed duplicity version 0.6.14, python 2.4.3, gpg 1.4.5 (Home: ~/.gnupg), awk 'GNU Awk 3.1.5', bash '3.2.25(1)-release (i686-redhat-linux- gnu)'. I get the following error: --- Start running command BKP at 20:27:43.776 --- Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1311, in ? 
with_tempdir(main) File ""/usr/bin/duplicity"", line 1304, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1156, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.4/site-packages/duplicity/commandline.py"", line 938, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/lib/python2.4/site-packages/duplicity/commandline.py"", line 831, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/lib/python2.4/site-packages/duplicity/backend.py"", line 155, in get_backend return _backends[pu.scheme](pu) File ""/usr/lib/python2.4/site- packages/duplicity/backends/imapbackend.py"", line 66, in __init__ self._resetConnection() File ""/usr/lib/python2.4/site- packages/duplicity/backends/imapbackend.py"", line 86, in _resetConnection self._conn = cl(imap_server, 993) File ""/usr/lib/python2.4/imaplib.py"", line 1101, in __init__ IMAP4.__init__(self, host, port) File ""/usr/lib/python2.4/imaplib.py"", line 181, in __init__ self.welcome = self._get_response() File ""/usr/lib/python2.4/imaplib.py"", line 876, in _get_response resp = self._get_line() File ""/usr/lib/python2.4/imaplib.py"", line 969, in _get_line line = self.readline() File ""/usr/lib/python2.4/imaplib.py"", line 1135, in readline char = self.sslobj.read(1) sslerror: The read operation timed out 20:28:14.499 Task 'BKP' failed with exit code '30'. --- Finished state FAILED 'code 30' at 20:28:14.499 - Runtime 00:00:30.722 --- ```",6 118019849,2011-09-01 20:14:28.093,Duplicity backs up unchanged files daily (lp:#839048),"[Original report](https://bugs.launchpad.net/bugs/839048) created by **Chris Stankaitis (cstankaitis)** ``` We are backing up a netapp volume which is mounted over NFS to our backup server. duplicity seems to be backing up files which are unchanged in the daily incremental. 
backup line: duplicity --encrypt-key ""7049C7BA"" --archive-dir ""/var/duplicity"" --name=""bt"" --tempdir ""/var/tmp"" --exclude ""/mnt/bt/.snapshot"" -v9 --full-if-older-than 1M /mnt/bt file:///backup/vol/bt >> /tmp/backup-output-bt.txt example file: # ll static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e -rw--w--w-. 1 nobody nobody 220 May 20 2010 static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e [root@backup1]# stat static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e File: `static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e' Size: 220 Blocks: 8 IO Block: 65536 regular file Device: 1dh/29d Inode: 2780177 Links: 1 Access: (0622/-rw--w--w-) Uid: ( 99/ nobody) Gid: ( 99/ nobody) Access: 2011-08-30 01:43:35.497401000 -0400 Modify: 2010-05-20 14:50:30.000000000 -0400 Change: 2011-05-25 23:25:12.938675000 -0400 As I understand it, that file should be in my full backup and then not touched again. There are a lot of these old untouched files which appear to be getting backed up every day, wasting a lot of space on our backup server. 
As you can see, duplicity thinks the file was modified as of last night's backup # cat backup-output-bt.txt.1 | grep .nfs000000000019bcc50000372e Selecting /mnt/bt/static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e Comparing ('static', 'ad_assets', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') and ('static', 'ad_assets', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') Getting delta of (('static', 'ad_assets', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') /mnt/bt/static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e reg) and (('static', 'ad_assets', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') reg) M static/ad_assets/var-2.8.0/cache/.nfs000000000019bcc50000372e Selecting /mnt/bt/static/ad_assets-backup-2010-10-19/var-2.8.0/cache/.nfs000000000019bcc50000372e Comparing ('static', 'ad_assets-backup-2010-10-19', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') and ('static', 'ad_assets-backup-2010-10-19', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') Getting delta of (('static', 'ad_assets-backup-2010-10-19', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') /mnt/bt/static/ad_assets-backup-2010-10-19/var-2.8.0/cache/.nfs000000000019bcc50000372e reg) and (('static', 'ad_assets-backup-2010-10-19', 'var-2.8.0', 'cache', '.nfs000000000019bcc50000372e') reg) M static/ad_assets-backup-2010-10-19/var-2.8.0/cache/.nfs000000000019bcc50000372e # ll backup-output-bt.txt.1 -rw-rw----. 1 root root 14163064063 Aug 31 21:57 backup-output-bt.txt.1 Duplicity version: 6.11 Python version 2.6.5 OS Distro and version Redhat Enterprise Linux 6 Type of target filesystem: Linux ```",20 118019822,2011-08-19 12:53:23.913,SElinux xattrs support for duplicity (lp:#829405),"[Original report](https://bugs.launchpad.net/bugs/829405) created by **jo akweb (jo-8)** ``` (This is more a feature request than a bug report - I hope that's ok.) As of duplicity-0.6.14, it doesn't seem to support SElinux file attributes. 
Without these attributes preserved, it can be virtually impossible to restore a working Redhat or CentOS system, where SElinux is activated by default. It should be possible to extend duplicity to support these attributes; rsync, for example, provides a command-line switch (--xattrs) for this. ```",26 118019820,2011-08-19 06:23:59.527,Always forces full backup (lp:#829198),"[Original report](https://bugs.launchpad.net/bugs/829198) created by **Michal Čihař (nijel)** ``` I'm running daily incremental backups and monthly full ones. Since upgrading to 0.6.14 (it might also appear in 0.6.13, because I'm not sure if I used that one), duplicity always does a full backup: Reading globbing filelist /etc/duplicity.list Synchronizing remote metadata to local cache... Deleting local /var/cache/duplicity/57cfdb6f20a4104d181d228fc19db339/duplicity-full-signatures.20110818T033800Z.sigtar.gz (not authoritative at backend). Deleting local /var/cache/duplicity/57cfdb6f20a4104d181d228fc19db339/duplicity-full.20110818T033800Z.manifest (not authoritative at backend). Last full backup date: none No signatures found, switching to full backup. The collection-status command reports all previous full backups correctly, and restoring from them also works fine. Files which existed on the backend by the time the backup was run: duplicity-full-signatures.20110819T044314Z.sigtar.gpg duplicity-full.20110819T044314Z.vol1.difftar.gpg duplicity-full.20110819T044314Z.manifest.gpg The problem seems to be that it no longer handles the path scp://backup@backup.home.cihar.com//volume1/backup/ as absolute on the backup server: State = sftp, Before = 'Connected to backup.home.cihar.com.' 
sftp command: 'mkdir """"' State = sftp, Before = 'mkdir """" Couldn't create directory: Failure' sftp command: 'cd """"' State = sftp, Before = 'cd """"' sftp command: 'mkdir """"' State = sftp, Before = 'mkdir """" Couldn't create directory: Failure' sftp command: 'cd """"' State = sftp, Before = 'cd """"' sftp command: 'mkdir ""volume1""' State = sftp, Before = 'mkdir ""volume1"" Couldn't create directory: Failure' sftp command: 'cd ""volume1""' State = sftp, Before = 'cd ""volume1""' sftp command: 'mkdir ""backup""' State = sftp, Before = 'mkdir ""backup"" Couldn't create directory: Failure' sftp command: 'cd ""backup""' State = sftp, Before = 'cd ""backup""' sftp command: 'mkdir """"' State = sftp, Before = 'mkdir """" Couldn't create directory: Failure' sftp command: 'cd """"' State = sftp, Before = 'cd """"' sftp command: 'mkdir ""jabber""' State = sftp, Before = 'mkdir ""jabber"" Couldn't create directory: Failure' sftp command: 'cd ""jabber""' State = sftp, Before = 'cd ""jabber""' sftp command: 'ls -1' State = sftp, Before = 'ls -1' State = sftp, Before = 'quit' ```",18 118022495,2011-08-14 18:06:30.174,restore failed (Invalid data - SHA1 hash mismatch) (lp:#826389),"[Original report](https://bugs.launchpad.net/bugs/826389) created by **Manolis Kapernaros (kapcom01)** ``` I was using deja-dup to backup my home automatically every day on an NFS folder with password encryption. Today I decided I had to format my HDD and reinstall ubuntu so I tried to restore my files with deja-dup. The restoring starts and I get back some of my files but suddenly it fails with this error: Invalid data - SHA1 hash mismatch: Calculated hash: 7d96fd9b424f777e795c40f29f0cc88c7d87ec3d Manifest hash: a102c548ef930c7a928f9519dbc5ea5f59441ebb I tried to restore earlier backups but I get the same error. I have really important files in these backups.. I'd appreciate if someone can help me recover them. Thanks. 
ProblemType: Bug DistroRelease: Ubuntu 11.04 Package: deja-dup 18.1.1-0ubuntu1.1 ProcVersionSignature: Ubuntu 2.6.38-10.46-generic 2.6.38.7 Uname: Linux 2.6.38-10-generic i686 NonfreeKernelModules: nvidia wl Architecture: i386 Date: Sun Aug 14 20:58:32 2011 EcryptfsInUse: Yes InstallationMedia: Ubuntu 11.04 ""Natty Narwhal"" - Release i386 (20110427.1) ProcEnviron: LANGUAGE=el_GR:en LANG=el_GR.UTF-8 SHELL=/bin/bash SourcePackage: deja-dup UpgradeStatus: No upgrade log present (probably fresh install) ``` Original tags: apport-bug i386 natty running-unity",30 118019797,2011-07-30 15:05:55.090,Warning on initial backup unable to delete file (lp:#818547),"[Original report](https://bugs.launchpad.net/bugs/818547) created by **Dan Poirier (poirier)** ``` duplicity 0.6.14 Python 2.6.1 Mac OS X 10.6.8 Target filesystem local Mac file system Command line: rm -rf ~/.cache to duplicity -v9 --no-encryption ./from file://`pwd`/to >dup.out 2>&1 Seeing these messages with default verbosity: Unable to delete /Users/poirier/.cache/duplicity/02164a79c531c573207470764797d0af/duplicity-full-signatures.20110730T150346Z.sigtar.gz: [Errno 2] No such file or directory: '/Users/poirier/.cache/duplicity/02164a79c531c573207470764797d0af/duplicity-full-signatures.20110730T150346Z.sigtar.gz' Writing /Users/poirier/tmp/to/duplicity-full.20110730T150346Z.manifest Unable to delete /Users/poirier/.cache/duplicity/02164a79c531c573207470764797d0af/duplicity-full.20110730T150346Z.manifest: [Errno 2] No such file or directory: '/Users/poirier/.cache/duplicity/02164a79c531c573207470764797d0af/duplicity-full.20110730T150346Z.manifest' Attaching full output. ```",22 118019795,2011-07-27 12:37:58.625,Passphrase requested more than once (lp:#816954),"[Original report](https://bugs.launchpad.net/bugs/816954) created by **chrispoole (chris-chrispoole)** ``` When asked for a passphrase for a sign key, Duplicity requests it twice, to ensure it has been entered correctly. 
In line with other unix cli programs, where the user is assumed to know exactly what he's doing, it should be requested only once. The worst thing that could happen would be for gpg to reject the passphrase, forcing the user to start Duplicity again. duplicity 0.6.14. ```",8 118022576,2011-07-18 13:30:37.113,tmpdir error with encryption (lp:#812278),"[Original report](https://bugs.launchpad.net/bugs/812278) created by **andreas owen (aowen)** ``` I am trying to make an encrypted backup but am getting weird results. No files are written in the destination folder and the log file is inconclusive. Unencrypted backups work fine. Can someone please help? I have attached the log file, the output of gpg2 --list-key, and the shell script I use. This is about the only duplicity full backup that is active in the shell script. This is the terminal output: owiLand> sh /etc/duplicity/duplicity_backup.sh Traceback (most recent call last): File ""/opt/bin/duplicity"", line 1250, in with_tempdir(main) File ""/opt/bin/duplicity"", line 1243, in with_tempdir fn() File ""/opt/bin/duplicity"", line 1216, in main full_backup(col_stats) File ""/opt/bin/duplicity"", line 417, in full_backup globals.backend) File ""/opt/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 275, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 267, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-zrwlI0-tempdir/mktemp-_gifDQ-2' sending incremental file list sent 64 bytes received 12 bytes 152.00 bytes/sec total size is 2310082 speedup is 30395.82 I have a DS211+ with the newest firmware and duplicity installed; the target filesystem is ext4. 
```",16 118019791,2011-07-15 19:01:04.462,duplicity doesn't work with --use-agent when the key hasn't been loaded yet (lp:#811218),"[Original report](https://bugs.launchpad.net/bugs/811218) created by **Peter Simons (simons-s)** ``` I'd like duplicity to use my gpg-agent. When I run it before my key is known to the agent, however, it just aborts with an error. Instead, I would like the agent to pop up the ""pinentry"" window on that occasion, so that I can enter the passphrase. I'm using duplicity 0.6.14. ```",6 118019790,2011-07-02 16:50:11.278,"Python version defaults to /usr/bin/python, should be /usr/bin/env python (lp:#804803)","[Original report](https://bugs.launchpad.net/bugs/804803) created by **Dan Loewenherz (dan-wp)** ``` I wrestled with an ImportError for a bit until I realized that Python wasn't being set to the python version first found in PATH. Changing the first line of the main duplicity script from /usr/bin/python to /usr/bin/env python should fix the issue. ``` Original tags: python",6 118022509,2011-07-01 19:05:14.164,KeyError: backup_set.volume_name_dict[vol_num] (lp:#804484),"[Original report](https://bugs.launchpad.net/bugs/804484) created by **Martin Josefsson (josefsson-martin)** ``` I did backups on my laptop a few months ago, and now I'm trying to recover the files on my freshly installed desktop, but Deja Dup gives me an error message. It works like a charm until I hit Next after the window where you select where the backups are located. I select the folder, hit Next, and it shows a progress bar, but then I get this error. I tried with various snapshots and they are not encrypted. I'm attaching the error message, in case that helps. Since I am a good bug reporter, I ran ""DEJA_DUP_DEBUG=1 deja-dup | tail -n 200 > /tmp/deja-dup.log"" and here are the results of that _______________________________________________________________________________________________________ DUPLICITY: DEBUG 1 DUPLICITY: . 
File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol23.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol25.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol29.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol33.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol36.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol40.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol43.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol50.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110525T220501Z.to.20110601T220533Z.vol51.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110601T220533Z.to.20110608T143124Z.vol1.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110601T220533Z.to.20110608T143124Z.vol6.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110601T220533Z.to.20110608T143124Z.vol7.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110601T220533Z.to.20110608T143124Z.manifest is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T143124Z.to.20110608T220546Z.vol1.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T143124Z.to.20110608T220546Z.vol2.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . 
File duplicity- inc.20110608T143124Z.to.20110608T220546Z.vol5.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T143124Z.to.20110608T220546Z.vol6.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T143124Z.to.20110608T220546Z.vol9.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity-new- signatures.20110608T143124Z.to.20110608T220546Z.sigtar.gz is not part of a known set; creating new set DUPLICITY: DEBUG 1 DUPLICITY: . Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110608T143124Z.to.20110608T220546Z.sigtar.gz' DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol2.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol3.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol4.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol6.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol12.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol14.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol16.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol17.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity- inc.20110608T220546Z.to.20110609T131140Z.vol21.difftar.gz is part of known set DUPLICITY: DEBUG 1 DUPLICITY: . Found backup chain [Tue May 17 14:33:42 2011]-[Tue May 17 14:33:42 2011] DUPLICITY: INFO 1 DUPLICITY: . 
Added incremental Backupset (start_time: Tue May 17 14:33:42 2011 / end_time: Fri May 20 17:13:34 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Fri May 20 17:13:34 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Fri May 20 17:13:34 2011] DUPLICITY: INFO 1 DUPLICITY: . Added incremental Backupset (start_time: Fri May 20 17:13:34 2011 / end_time: Thu May 26 00:05:01 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Thu May 26 00:05:01 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Thu May 26 00:05:01 2011] DUPLICITY: INFO 1 DUPLICITY: . Added incremental Backupset (start_time: Thu May 26 00:05:01 2011 / end_time: Thu Jun 2 00:05:33 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Thu Jun 2 00:05:33 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Thu Jun 2 00:05:33 2011] DUPLICITY: INFO 1 DUPLICITY: . Added incremental Backupset (start_time: Thu Jun 2 00:05:33 2011 / end_time: Wed Jun 8 16:31:24 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Wed Jun 8 16:31:24 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Wed Jun 8 16:31:24 2011] DUPLICITY: INFO 1 DUPLICITY: . Added incremental Backupset (start_time: Wed Jun 8 16:31:24 2011 / end_time: Thu Jun 9 00:05:46 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Thu Jun 9 00:05:46 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Thu Jun 9 00:05:46 2011] DUPLICITY: INFO 1 DUPLICITY: . Added incremental Backupset (start_time: Thu Jun 9 00:05:46 2011 / end_time: Thu Jun 9 15:11:40 2011) DUPLICITY: DEBUG 1 DUPLICITY: . Added set Thu Jun 9 15:11:40 2011 to pre-existing chain [Tue May 17 14:33:42 2011]-[Thu Jun 9 15:11:40 2011] DUPLICITY: NOTICE 1 DUPLICITY: . 
Last full backup date: Tue May 17 14:33:42 2011 DUPLICITY: INFO 3 DUPLICITY: backend GIOBackend DUPLICITY: archive-dir (() /home/martin/.cache/deja- dup/d4c11c57566a96caba822aae6cf14753 dir) DUPLICITY: chain-complete DUPLICITY: full 20110517T123342Z 823 DUPLICITY: inc 20110520T151334Z 417 DUPLICITY: inc 20110525T220501Z 66 DUPLICITY: inc 20110601T220533Z 52 DUPLICITY: inc 20110608T143124Z 9 DUPLICITY: inc 20110608T220546Z 12 DUPLICITY: inc 20110609T131140Z 21 DUPLICITY: orphaned-sets-num 0 DUPLICITY: incomplete-sets-num 0 DUPLICITY: . Collection Status DUPLICITY: . ----------------- DUPLICITY: . Connecting with backend: GIOBackend DUPLICITY: . Archive dir: /home/martin/.cache/deja- dup/d4c11c57566a96caba822aae6cf14753 DUPLICITY: . DUPLICITY: . Found 0 secondary backup chains. DUPLICITY: . DUPLICITY: . Found primary backup chain with matching signature chain: DUPLICITY: . ------------------------- DUPLICITY: . Chain start time: Tue May 17 14:33:42 2011 DUPLICITY: . Chain end time: Thu Jun 9 15:11:40 2011 DUPLICITY: . Number of contained backup sets: 7 DUPLICITY: . Total number of contained volumes: 1400 DUPLICITY: . Type of backup set: Time: Num volumes: DUPLICITY: . Full Tue May 17 14:33:42 2011 823 DUPLICITY: . Incremental Fri May 20 17:13:34 2011 417 DUPLICITY: . Incremental Thu May 26 00:05:01 2011 66 DUPLICITY: . Incremental Thu Jun 2 00:05:33 2011 52 DUPLICITY: . Incremental Wed Jun 8 16:31:24 2011 9 DUPLICITY: . Incremental Thu Jun 9 00:05:46 2011 12 DUPLICITY: . Incremental Thu Jun 9 15:11:40 2011 21 DUPLICITY: . ------------------------- DUPLICITY: . No orphaned or incomplete backup sets found. ** (deja-dup:6699): DEBUG: DuplicityInstance.vala:575: duplicity (6790) exited with value 30 DUPLICITY: DEBUG 1 DUPLICITY: . Removing still remembered temporary file /tmp/duplicity- lwxhdv-tempdir/mkstemp-5j_Bcz-1 DUPLICITY: ERROR 30 KeyError DUPLICITY: . Traceback (most recent call last): DUPLICITY: . File ""/usr/bin/duplicity"", line 1262, in DUPLICITY: . 
with_tempdir(main) DUPLICITY: . File ""/usr/bin/duplicity"", line 1255, in with_tempdir DUPLICITY: . fn() DUPLICITY: . File ""/usr/bin/duplicity"", line 1209, in main DUPLICITY: . restore(col_stats) DUPLICITY: . File ""/usr/bin/duplicity"", line 539, in restore DUPLICITY: . restore_get_patched_rop_iter(col_stats)): DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 521, in Write_ROPaths DUPLICITY: . for ropath in rop_iter: DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 493, in integrate_patch_iters DUPLICITY: . for patch_seq in collated: DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 378, in yield_tuples DUPLICITY: . setrorps( overflow, elems ) DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 367, in setrorps DUPLICITY: . elems[i] = iter_list[i].next() DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 112, in difftar2path_iter DUPLICITY: . tarinfo_list = [tar_iter.next()] DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 328, in next DUPLICITY: . self.set_tarfile() DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/patchdir.py"", line 322, in set_tarfile DUPLICITY: . self.current_fp = self.fileobj_iter.next() DUPLICITY: . File ""/usr/bin/duplicity"", line 575, in get_fileobj_iter DUPLICITY: . backup_set.volume_name_dict[vol_num], DUPLICITY: . KeyError: 1 DUPLICITY: . ________________________________________________________________________________ ""dpkg-query -W deja-dup duplicity"" gives me: deja-dup 18.1.1-0ubuntu1.1 duplicity 0.6.13-0ubuntu1 ""gconftool-2 --dump /apps/deja-dup > /tmp/deja-dup.settings"" gives me: And I'm running Ubuntu 11.04. 
```",4 118019767,2011-06-07 12:30:28.590,Unable to connect to Rackspace UK servers (lp:#793997),"[Original report](https://bugs.launchpad.net/bugs/793997) created by **Tor Inge Schulstad (tor-inge-schulstad)** ``` I have an account at Rackspace, but I use the London servers. I log on to https://lon.manage.rackspacecloud.com I'm not able to use this account to store my backups. I only get AuthenticationFailed. Software versions: deja-dup 18.1.1-0ubuntu1 duplicity 0.6.13-0ubuntu1 From logfile: DUPLICITY: INFO 1 DUPLICITY: . Import of duplicity.backends.webdavbackend Succeeded DUPLICITY: ERROR 38 DUPLICITY: . Connection failed, please check your credentials: AuthenticationFailed Settings: org.gnome.DejaDup backend 'rackspace' org.gnome.DejaDup delete-after 56 org.gnome.DejaDup encrypt true org.gnome.DejaDup exclude-list ['$TRASH', '$DOWNLOAD'] org.gnome.DejaDup include-list ['/media/WD_ELEMENTS/bilder/2011'] org.gnome.DejaDup last-run '' org.gnome.DejaDup periodic true org.gnome.DejaDup periodic-period 14 org.gnome.DejaDup root-prompt true org.gnome.DejaDup.File icon '' org.gnome.DejaDup.File name '' org.gnome.DejaDup.File path '' org.gnome.DejaDup.File relpath@ ay [] org.gnome.DejaDup.File short-name '' org.gnome.DejaDup.File type 'normal' org.gnome.DejaDup.File uuid '' org.gnome.DejaDup.Rackspace container 'dejadup' org.gnome.DejaDup.Rackspace username 'tischulstad' org.gnome.DejaDup.S3 bucket '' org.gnome.DejaDup.S3 folder 'workstation' org.gnome.DejaDup.S3 id '' org.gnome.DejaDup.U1 folder '/deja-dup/$HOSTNAME' Distro: Ubuntu 11.04 ```",30 118019757,2011-05-17 15:49:26.988,AssertionError after incomplete incremental backup (lp:#784098),"[Original report](https://bugs.launchpad.net/bugs/784098) created by **Jens Finkhäuser (finkhaeuser-consulting)** ``` Duplicity version: 0.6.08b Python version: 2.6.5 OS Distro and version: Ubuntu 10.04 server Local and Remote metadata are synchronized, no sync needed. Last inc backup left a partial set, restarting. 
Last full backup date: Thu Mar 10 04:47:43 2011 Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1239, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1232, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1214, in main     incremental_backup(sig_chain)   File ""/usr/bin/duplicity"", line 474, in incremental_backup     assert dup_time.curtime != dup_time.prevtime, ""time not moving forward at appropriate pace - system clock issues?"" AssertionError: time not moving forward at appropriate pace - system clock issues? Looking at the code, duplicity seems to sleep just before line 474, and then requests curtime again. It's not clear how that value should have changed in the meantime; the value returned is from the global curtime variable in dup_time.py. Without understanding the rest of the code further, this looks like a bug in the setcurtime() function; clearly the line ""t = time_in_secs or curtime or long(time.time()) # only set from NOW once"" fails to set the current time from NOW in this instance, which is presumably what is expected. ```",6 118019754,2011-04-27 09:06:23.217,Duplicity crashes on cleanup delete (lp:#771704),"[Original report](https://bugs.launchpad.net/bugs/771704) created by **Steve Hand (steve-zzs)** ``` Running duplicity 0.6.13 on my S3 backupset after I was warned of orphaned backups. 
Python 2.4.3 Centos 5.6 duplicity -v9 cleanup --force s3+http://sticky- backup.zzs.co/server/share/Customers Using archive dir: /root/.cache/duplicity/260dec5c57853f7f26336ba040c6ab33 Using backup name: 260dec5c57853f7f26336ba040c6ab33 Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.tahoebackend Succeeded Main action: cleanup ================================================================================ duplicity 0.6.13 (April 02, 2011) Args: /usr/bin/duplicity -v9 cleanup --force s3+http://sticky- backup.zzs.co/server/share/Customers Linux backup.zzs.co 2.6.18-238.5.1.el5 #1 SMP Fri Apr 1 18:41:58 EDT 2011 x86_64 x86_64 /usr/bin/python 2.4.3 (#1, Mar 5 2011, 21:26:05) [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] ================================================================================ Local and Remote metadata are synchronized, no sync needed. 
0 files exist on backend 3 files exist in cache Extracting backup chains from list of files: ['duplicity- inc.20110426T141842Z.to.20110427T040006Z.manifest.part', 'duplicity-new- signatures.20110426T141842Z.to.20110427T040006Z.sigtar.part', 'duplicity- new-signatures.20110426T103039Z.to.20110426T141842Z.sigtar.part'] File duplicity-inc.20110426T141842Z.to.20110427T040006Z.manifest.part is not part of a known set; creating new set File duplicity-new- signatures.20110426T141842Z.to.20110427T040006Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110426T141842Z.to.20110427T040006Z.sigtar.part' File duplicity-new- signatures.20110426T103039Z.to.20110426T141842Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110426T103039Z.to.20110426T141842Z.sigtar.part' Found orphaned set Wed Apr 27 05:00:06 2011 Warning, found the following local orphaned signature files: duplicity-new-signatures.20110426T103039Z.to.20110426T141842Z.sigtar.part duplicity-new-signatures.20110426T141842Z.to.20110427T040006Z.sigtar.part Warning, found the following orphaned backup file: [duplicity-inc.20110426T141842Z.to.20110427T040006Z.manifest.part] Last full backup date: none Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: /root/.cache/duplicity/260dec5c57853f7f26336ba040c6ab33 Found 0 secondary backup chains. No backup chains with active signatures found Also found 1 backup set not part of any chain, and 0 incomplete backup sets. These may be deleted by running duplicity with the ""cleanup"" command. 
Deleting these files from backend: duplicity-new-signatures.20110426T103039Z.to.20110426T141842Z.sigtar.part duplicity-new-signatures.20110426T141842Z.to.20110427T040006Z.sigtar.part duplicity-inc.20110426T141842Z.to.20110427T040006Z.manifest.part Using temporary directory /tmp/duplicity-hBHUX6-tempdir Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1250, in ? with_tempdir(main) File ""/usr/bin/duplicity"", line 1243, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1205, in main cleanup(col_stats) File ""/usr/bin/duplicity"", line 699, in cleanup col_stats.backend.delete(ext_remote) File ""/usr/lib64/python2.4/site- packages/duplicity/backends/botobackend.py"", line 295, in delete self.bucket.delete_key(self.key_prefix + filename) AttributeError: 'NoneType' object has no attribute 'delete_key' ```",6 118019734,2011-04-21 17:38:16.174,Don't write new incremental set if nothing changed (lp:#768481),"[Original report](https://bugs.launchpad.net/bugs/768481) created by **Adam Porter (alphapapa)** ``` I have a script that calls Duplicity separately on smaller sets of data so that important ones get backed up first. Some of those early sets rarely change, but even if nothing changes, Duplicity uploads a new incremental set of files. This not only wastes time when backing up, but it causes a LOT of time to be wasted when doing a restore. For example, a few days ago I needed to restore three small files from a small backup set that hadn't changed in years. But to do so, Duplicity had to download probably over a hundred incremental sets in which nothing changed. This took a LONG time! It's especially slow since, judging from Duplicity's verbose output, it downloads one file at a time--the latency between commands adds up quickly (I'm ignorant about SFTP: is there no way to request more than one file at a time?). Restoring less than 20 KB of data took over 10 minutes because of all the empty incremental sets that had to be downloaded. 
I realize that it's important to indicate when incremental backups were made, even if nothing changed, so I think Duplicity should store empty incremental sets in a different way. For example, it could store a single file in the remote directory that lists all the times incremental backups were made in which nothing changed. When making a new, unchanged incremental backup, Duplicity could simply append the date and time to the file and replace the copy on the server. This would save so much time it's not even funny! :) ```",58 118019730,2011-04-18 07:07:24.113,duplicity should fail gracefully on empty GPG signature file (lp:#764301),"[Original report](https://bugs.launchpad.net/bugs/764301) created by **Olivier Berger (oberger)** ``` (This is a copy of http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=623184) Here's what I get with an empty remote GPG file (probably resulting of out of quota), when trying to resync the local cache : ... Fetching .../duplicity- inc.20101225T001425Z.to.20101226T001506Z.manifest.gpg to /tmp/duplicity- mqzqB7-tempdir/mktemp-f0fGqJ-7' State = sftp, Before = 'quit' Removing still remembered temporary file /tmp/duplicity- mqzqB7-tempdir/mkstemp-xGZcgl-1 Removing still remembered temporary file /tmp/duplicity- mqzqB7-tempdir/mktemp-f0fGqJ-7 GPG error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1145, in main sync_archive() File ""/usr/bin/duplicity"", line 959, in sync_archive copy_to_local(fn) File ""/usr/bin/duplicity"", line 915, in copy_to_local globals.archive_dir.append(loc_name).name) File ""/usr/bin/duplicity"", line 841, in copy_raw data = src_iter.next(block_size).data File ""/usr/bin/duplicity"", line 900, in next self.fileobj.close() File ""/usr/lib/python2.6/dist-packages/duplicity/dup_temp.py"", line 210, in close assert not self.fileobj.close() File 
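The marker-file idea proposed in the report could be sketched roughly as below; the function names and marker-file format are hypothetical, not anything duplicity implements:

```python
import os


def record_empty_increment(marker_path, timestamp):
    """Append a backup timestamp to a local marker file instead of
    uploading a full (but contentless) incremental set."""
    with open(marker_path, "a") as f:
        f.write(timestamp + "\n")


def empty_increments(marker_path):
    """Return the recorded no-change backup times, oldest first."""
    if not os.path.exists(marker_path):
        return []
    with open(marker_path) as f:
        return [line.strip() for line in f if line.strip()]
```

On restore, any chain member found in the marker list could simply be skipped instead of downloaded, which is exactly the bandwidth the reporter wants back.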
""/usr/lib/python2.6/dist-packages/duplicity/gpg.py"", line 198, in close self.gpg_failed() File ""/usr/lib/python2.6/dist-packages/duplicity/gpg.py"", line 165, in gpg_failed raise GPGError, msg GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: decrypt_message failed: eof ===== End GnuPG log ===== GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: decrypt_message failed: eof ===== End GnuPG log ===== Of course, by listing the cache one can find the culprit 0 size file : duplicity-inc.20101225T001425Z.to.20101226T001506Z.manifest.gpg But I guess it wouldn't be so hard to have a check against empty files and manage the failing backup/restore then. Hope this helps. Best regards, ```",14 118019721,2011-04-11 21:00:58.007,non-interactive duplicity job using GPG keys crashes (lp:#758077),"[Original report](https://bugs.launchpad.net/bugs/758077) created by **Joshua Jensen (joshua-joshuajensen)** ``` I'm running a duplicity nightly backup from cron that is having problems. I'm using a GPG keypair. The public key is used to encrypt without a PASSPHRASE or interaction, and the private key to decrypt and restore interactively with a PASSPHRASE later if necessary. This works from my command prompt *where I am not prompted for anything when running this command*. However, from cron, this fails with getpass.py being uncomfortable about not being able to control echo on the terminal: ARGS=""--asynchronous-upload --tempdir /tmp --no-print-statistics \ --include /data/backups/bin --encrypt-key CE71E9DC \ --exclude-globbing-filelist $DEST/bin/rdiff-backup-excludes \ -v2 --volsize 100 $@"" duplicity $ARGS / file://$DESTDIR /usr/lib64/python2.6/getpass.py:83: GetPassWarning: Can not control echo on the terminal. passwd = fallback_getpass(prompt, stream) Warning: Password input may be echoed. 
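A check along the lines the reporter suggests: refuse to hand a zero-byte remote file to GnuPG and raise a readable error instead of letting gpg die with `decrypt_message failed: eof`. The helper name is illustrative only:

```python
import os


def require_nonempty(path):
    """Fail early on zero-byte files (e.g. truncated by an out-of-quota
    upload) before they reach the GPG decryption pipeline."""
    if os.path.getsize(path) == 0:
        raise ValueError("%s is empty; the remote copy is likely truncated" % path)
    return path
```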
GnuPG passphrase: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1245, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1238, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1139, in main sync_archive() File ""/usr/bin/duplicity"", line 949, in sync_archive globals.gpg_profile.passphrase = get_passphrase(1, ""sync"") File ""/usr/bin/duplicity"", line 129, in get_passphrase pass1 = getpass.getpass(""GnuPG passphrase: "") File ""/usr/lib64/python2.6/getpass.py"", line 83, in unix_getpass passwd = fallback_getpass(prompt, stream) File ""/usr/lib64/python2.6/getpass.py"", line 118, in fallback_getpass return _raw_input(prompt, stream) File ""/usr/lib64/python2.6/getpass.py"", line 135, in _raw_input raise EOFError EOFError duplicity version 0.6.11, Python 2.6.5, RHEL 6.0 x86_64, target filesystem is an NFS mount Surely non-interactive duplicity commands shouldn't fail because duplicity can't go interactive!! ```",18 118019709,2011-04-09 15:08:09.813,Device numbers overflow on Linux x86_64 (lp:#755583),"[Original report](https://bugs.launchpad.net/bugs/755583) created by **Nate Eldredge (nate-thatsmathematics)** ``` In trying to use Duplicity to archive some backups from another OS, I found that it fails on device files with certain major/minor numbers. This is on Linux Ubuntu maverick x86_64; I cannot reproduce it on 32-bit x86. I've attached a log with the error message. This is Duplicity 0.6.13, using the ubuntu package 0.6.13-0ubuntu1~maverick1. Python is 2.6.6, using the ubuntu package 2.6.6-2ubuntu2. The filesystem is ext4. It seems this happens when the device file's st_rdev exceeds 2^31. The major and minor numbers are packed into this field in a funny way so it takes a little thought to see how to reproduce this. 
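A hedged sketch of how a passphrase prompt can avoid this EOFError: prefer the PASSPHRASE environment variable (which duplicity already honors, per the "Reuse configured PASSPHRASE" lines above) and only fall back to `getpass` on a real terminal. This illustrates the mechanism, not duplicity's actual `get_passphrase` code path:

```python
import getpass
import os
import sys


def get_passphrase_safe():
    """Never prompt when there is no terminal to prompt on (e.g. cron)."""
    passphrase = os.environ.get("PASSPHRASE")
    if passphrase is not None:
        return passphrase
    if not sys.stdin.isatty():
        raise RuntimeError(
            "PASSPHRASE is unset and stdin is not a terminal; "
            "refusing to prompt interactively")
    return getpass.getpass("GnuPG passphrase: ")
```

From cron this turns a confusing `EOFError` traceback into a one-line, actionable failure.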
The encoding goes as follows: the 32 bits of st_rdev from high to low are: MIN[19:8] MAJ[11:0] MIN[7:0] (Actually it looks like st_rdev is really 64 bits, encoded as MAJ[31:12] MIN[31:8] MAJ[11:0] MIN[7:0], but it is not currently possible to create a device with the high 32 bits nonzero.) So we have a 12-bit major number and a 20-bit minor number. It appears duplicity fails when bit 20 of the minor number is set. E.g. 'mknod foo c 0xfff 0xfffff' or 'mknod foo c 0x000 0x80000'. Using the --exclude-device-files options does NOT prevent duplicity from failing. ```",6 118019700,2011-04-09 13:57:33.658,Child processes go defunct (lp:#755545),"[Original report](https://bugs.launchpad.net/bugs/755545) created by **Wladimir J. van der Laan (laanwj)** ``` I'm doing a verify of a fairly large remote backup and see this in the process list. It seems like the child gpg processes go defunct and are not properly cleaned up, and there's many of them. root 7545 17.0 1.5 16116 12244 pts/4 S+ 10:37 52:41 \_ /usr/bin/python /usr/bin/duplicity verify -v4 --include-globbing-filelis root 7592 0.0 0.0 0 0 pts/4 Z+ 10:38 0:01 \_ [gpg] root 7625 0.0 0.0 0 0 pts/4 Z+ 10:39 0:00 \_ [gpg] root 7632 0.0 0.0 0 0 pts/4 Z+ 10:39 0:00 \_ [gpg] root 7637 0.0 0.0 0 0 pts/4 Z+ 10:39 0:00 \_ [gpg] root 7644 0.0 0.0 0 0 pts/4 Z+ 10:39 0:00 \_ [gpg] root 7651 0.0 0.0 0 0 pts/4 Z+ 10:39 0:00 \_ [gpg] root 7659 0.0 0.0 0 0 pts/4 Z+ 10:40 0:00 \_ [gpg] root 7684 0.0 0.0 0 0 pts/4 Z+ 10:40 0:02 \_ [gpg] root 7691 0.0 0.0 0 0 pts/4 Z+ 10:41 0:02 \_ [gpg] root 7698 0.0 0.0 0 0 pts/4 Z+ 10:41 0:02 \_ [gpg] root 7707 0.0 0.0 0 0 pts/4 Z+ 10:41 0:03 \_ [gpg] root 7714 0.0 0.0 0 0 pts/4 Z+ 10:41 0:03 \_ [gpg] root 7736 0.0 0.0 0 0 pts/4 Z+ 10:42 0:02 \_ [gpg] root 7743 0.0 0.0 0 0 pts/4 Z+ 10:42 0:02 \_ [gpg] root 7752 0.0 0.0 0 0 pts/4 Z+ 10:42 0:02 \_ [gpg] root 7763 0.0 0.0 0 0 pts/4 Z+ 10:42 0:03 \_ [gpg] root 7770 0.0 0.0 0 0 pts/4 Z+ 10:42 0:03 \_ [gpg] root 7777 0.0 0.0 0 0 pts/4 Z+ 10:43 0:01 \_ [gpg] 
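The packing the reporter describes can be reproduced directly, which makes it easy to see why `mknod foo c 0x000 0x80000` produces an `st_rdev` of exactly 2^31, the first value that no longer fits a signed 32-bit field:

```python
def pack_rdev(major, minor):
    """Pack major/minor into st_rdev using the Linux/glibc layout from
    the report: MAJ[31:12] MIN[31:8] MAJ[11:0] MIN[7:0]."""
    return ((minor & 0xff)
            | ((major & 0xfff) << 8)
            | ((minor >> 8) << 20)
            | ((major >> 12) << 32))
```

With this layout, `pack_rdev(0, 0x80000)` equals `2**31`, matching the reporter's observation that the failure starts when the high minor bits come into play.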
root 7782 0.0 0.0 0 0 pts/4 Z+ 10:43 0:00 \_ [gpg] root 7787 0.0 0.0 0 0 pts/4 Z+ 10:43 0:00 \_ [gpg] root 7796 0.0 0.1 4580 1472 pts/4 SL+ 10:43 0:02 \_ gpg --status-fd 98 --passphrase-fd 102 --logger-fd 95 --batch --no-t root 7803 0.0 0.0 0 0 pts/4 Z+ 10:43 0:00 \_ [gpg] root 7808 0.0 0.1 4580 1468 pts/4 SL+ 10:43 0:00 \_ gpg --status-fd 106 --passphrase-fd 110 --logger-fd 103 --batch --no root 7815 0.0 0.0 0 0 pts/4 Z+ 10:44 0:01 \_ [gpg] root 7820 0.0 0.0 0 0 pts/4 Z+ 10:44 0:00 \_ [gpg] root 7825 0.0 0.0 0 0 pts/4 Z+ 10:44 0:00 \_ [gpg] root 7836 0.0 0.0 0 0 pts/4 Z+ 10:44 0:04 \_ [gpg] root 7843 0.0 0.0 0 0 pts/4 Z+ 10:44 0:02 \_ [gpg] root 7850 0.0 0.0 0 0 pts/4 Z+ 10:44 0:02 \_ [gpg] root 7870 0.0 0.0 0 0 pts/4 Z+ 10:45 0:02 \_ [gpg] root 7877 0.0 0.0 0 0 pts/4 Z+ 10:45 0:02 \_ [gpg] root 7884 0.0 0.0 0 0 pts/4 Z+ 10:45 0:04 \_ [gpg] root 7893 0.0 0.0 0 0 pts/4 Z+ 10:45 0:05 \_ [gpg] root 7903 0.0 0.0 0 0 pts/4 Z+ 10:46 0:04 \_ [gpg] root 7911 0.0 0.0 0 0 pts/4 Z+ 10:46 0:03 \_ [gpg] root 7920 0.0 0.0 0 0 pts/4 Z+ 10:46 0:02 \_ [gpg] root 7929 0.0 0.0 0 0 pts/4 Z+ 10:46 0:04 \_ [gpg] root 7938 0.0 0.0 0 0 pts/4 Z+ 10:47 0:02 \_ [gpg] root 7945 0.0 0.0 0 0 pts/4 Z+ 10:47 0:02 \_ [gpg] root 7952 0.0 0.0 0 0 pts/4 Z+ 10:47 0:02 \_ [gpg] root 7961 0.0 0.0 0 0 pts/4 Z+ 10:47 0:02 \_ [gpg] root 7968 0.0 0.1 4580 1468 pts/4 SL+ 10:47 0:02 \_ gpg --status-fd 182 --passphrase-fd 186 --logger-fd 179 --batch --no root 7991 0.0 0.0 0 0 pts/4 Z+ 10:48 0:04 \_ [gpg] root 8001 0.0 0.0 0 0 pts/4 Z+ 10:48 0:02 \_ [gpg] root 8008 0.0 0.0 0 0 pts/4 Z+ 10:48 0:02 \_ [gpg] root 8034 0.0 0.0 0 0 pts/4 Z+ 10:49 0:02 \_ [gpg] root 8043 0.0 0.1 4580 1468 pts/4 SL+ 10:49 0:04 \_ gpg --status-fd 206 --passphrase-fd 210 --logger-fd 203 --batch --no root 8050 0.0 0.0 0 0 pts/4 Z+ 10:49 0:02 \_ [gpg] root 8057 0.0 0.0 0 0 pts/4 Z+ 10:50 0:01 \_ [gpg] root 8064 0.0 0.0 0 0 pts/4 Z+ 10:50 0:03 \_ [gpg] root 8071 0.0 0.0 0 0 pts/4 Z+ 10:50 0:01 \_ [gpg] root 8080 0.0 0.0 0 0 pts/4 Z+ 10:50 
0:01 \_ [gpg] root 8087 0.0 0.0 0 0 pts/4 Z+ 10:50 0:01 \_ [gpg] root 8109 0.0 0.0 0 0 pts/4 Z+ 10:51 0:04 \_ [gpg] root 8116 0.0 0.0 0 0 pts/4 Z+ 10:51 0:02 \_ [gpg] root 8125 0.0 0.0 0 0 pts/4 Z+ 10:51 0:02 \_ [gpg] root 8134 0.0 0.0 0 0 pts/4 Z+ 10:51 0:02 \_ [gpg] root 8143 0.0 0.0 0 0 pts/4 Z+ 10:52 0:03 \_ [gpg] root 8150 0.0 0.0 0 0 pts/4 Z+ 10:52 0:03 \_ [gpg] root 8159 0.0 0.1 4580 1468 pts/4 SL+ 10:52 0:05 \_ gpg --status-fd 258 --passphrase-fd 262 --logger-fd 255 --batch --no root 8168 0.0 0.0 0 0 pts/4 Z+ 10:52 0:05 \_ [gpg] root 8177 0.0 0.0 0 0 pts/4 Z+ 10:53 0:05 \_ [gpg] root 8186 0.0 0.0 0 0 pts/4 Z+ 10:53 0:05 \_ [gpg] root 8197 0.0 0.0 0 0 pts/4 Z+ 10:53 0:06 \_ [gpg] root 8236 0.0 0.0 0 0 pts/4 Z+ 10:54 0:05 \_ [gpg] root 8245 0.0 0.0 0 0 pts/4 Z+ 10:54 0:05 \_ [gpg] root 8254 0.0 0.0 0 0 pts/4 Z+ 10:55 0:05 \_ [gpg] root 8263 0.0 0.0 0 0 pts/4 Z+ 10:55 0:05 \_ [gpg] root 8272 0.0 0.1 4580 1468 pts/4 SL+ 10:55 0:05 \_ gpg --status-fd 298 --passphrase-fd 302 --logger-fd 295 --batch --no root 8281 0.0 0.0 0 0 pts/4 Z+ 10:56 0:05 \_ [gpg] root 8294 0.0 0.0 0 0 pts/4 Z+ 10:56 0:05 \_ [gpg] root 8305 0.0 0.0 0 0 pts/4 Z+ 10:56 0:05 \_ [gpg] root 8328 0.0 0.0 0 0 pts/4 Z+ 10:57 0:05 \_ [gpg] root 8342 0.0 0.0 0 0 pts/4 Z+ 10:57 0:14 \_ [gpg] root 8350 0.0 0.0 0 0 pts/4 Z+ 10:57 0:05 \_ [gpg] root 8360 0.0 0.0 0 0 pts/4 Z+ 10:58 0:05 \_ [gpg] root 8369 0.0 0.0 0 0 pts/4 Z+ 10:58 0:04 \_ [gpg] root 8378 0.0 0.0 0 0 pts/4 Z+ 10:58 0:04 \_ [gpg] root 8387 0.0 0.0 0 0 pts/4 Z+ 10:58 0:04 \_ [gpg] root 8396 0.0 0.0 0 0 pts/4 Z+ 10:59 0:05 \_ [gpg] root 8403 0.0 0.0 0 0 pts/4 Z+ 10:59 0:04 \_ [gpg] root 8410 0.0 0.0 0 0 pts/4 Z+ 10:59 0:00 \_ [gpg] root 8415 0.0 0.0 0 0 pts/4 Z+ 10:59 0:00 \_ [gpg] root 8420 0.0 0.0 0 0 pts/4 Z+ 10:59 0:00 \_ [gpg] root 8427 0.0 0.0 0 0 pts/4 Z+ 10:59 0:00 \_ [gpg] root 8447 0.0 0.0 0 0 pts/4 Z+ 11:00 0:00 \_ [gpg] root 8462 0.0 0.0 0 0 pts/4 Z+ 11:00 0:13 \_ [gpg] root 8473 0.0 0.1 4580 1464 pts/4 SL+ 11:00 0:01 \_ gpg 
--status-fd 374 --passphrase-fd 378 --logger-fd 371 --batch --no root 8480 0.0 0.1 4580 1468 pts/4 SL+ 11:01 0:01 \_ gpg --status-fd 378 --passphrase-fd 382 --logger-fd 375 --batch --no root 8502 0.0 0.0 0 0 pts/4 Z+ 11:01 0:00 \_ [gpg] root 8509 0.0 0.0 0 0 pts/4 Z+ 11:02 0:01 \_ [gpg] root 8514 0.0 0.0 0 0 pts/4 Z+ 11:02 0:00 \_ [gpg] root 8521 0.0 0.0 0 0 pts/4 Z+ 11:02 0:01 \_ [gpg] root 8530 0.0 0.0 0 0 pts/4 Z+ 11:02 0:00 \_ [gpg] root 8541 0.0 0.0 0 0 pts/4 Z+ 11:02 0:06 \_ [gpg] root 8561 0.0 0.0 0 0 pts/4 Z+ 11:03 0:00 \_ [gpg] root 8569 0.0 0.0 0 0 pts/4 Z+ 11:03 0:00 \_ [gpg] root 12032 0.0 0.1 4580 1468 pts/4 SL+ 12:56 0:07 \_ gpg --status-fd 424 --passphrase-fd 432 --logger-fd 8 --batch --no-t root 12211 0.1 0.1 4580 1472 pts/4 SL+ 13:02 0:09 \_ gpg --status-fd 280 --passphrase-fd 419 --logger-fd 6 --batch --no-t root 12751 0.0 0.0 0 0 pts/4 Z+ 13:21 0:03 \_ [gpg] root 13871 0.1 0.0 0 0 pts/4 Z+ 13:57 0:08 \_ [gpg] root 17177 2.9 0.1 4784 1472 pts/0 Ss+ 15:45 0:00 \_ /usr/bin/sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 orion root 17178 6.0 0.4 6280 3472 pts/0 S+ 15:45 0:01 \_ /usr/bin/ssh -oForwardX11 no -oForwardAgent no -oPermitLocalComm Duplicity version: 0.6.13 Python version: 2.5.2 OS distro and version: Ubuntu 8.04 LTS Type of filesystem: reiserfs ```",6 118022505,2011-04-05 09:44:07.602,Fatal error: KeyError: 1 while restoring incremental backup (lp:#751178),"[Original report](https://bugs.launchpad.net/bugs/751178) created by **ChieftainY2k (chieftainy2k)** ``` I've just tried the restore procedure, but it seems that it fails under ubuntu with the latest version of duplicity installed (from tar.gz): root@chieftainy2k:~# uname -a Linux chieftainy2k 2.6.35-28-generic #49-Ubuntu SMP Tue Mar 1 14:39:03 UTC 2011 x86_64 GNU/Linux root@chieftainy2k:~# python -V Python 2.6.6 root@chieftainy2k:~# duplicity -V duplicity 0.6.13 root@chieftainy2k:/tmp/restore# duplicity -v9 restore ftp://USER@SERVER/backup/DIR ./ ........ (cut) ........... 
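The `Z+` / `[gpg]` entries above are zombies: children that exited but were never `wait()`ed on by the parent. A minimal illustration of the mechanism in Python, with `communicate()` both draining the pipe and reaping the child; this shows the general fix, not duplicity's actual gpg.py:

```python
import subprocess


def run_and_reap(argv):
    """Run a child, drain its stdout, and reap it so it cannot linger
    as a <defunct> process in the parent's process table."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    out, _ = proc.communicate()  # drains the pipe AND calls waitpid()
    return proc.returncode, out
```

A parent that spawns many short-lived helpers and only reads their output without waiting will accumulate zombies exactly as the process listing shows.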
Collection Status ----------------- Connecting with backend: FTPBackend Archive dir: /home/chieftainy2k/.cache/duplicity/3d571ef397a9af4d38eaa5356d94eacd Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sat Mar 5 13:58:46 2011 Chain end time: Wed Mar 30 17:21:39 2011 Number of contained backup sets: 17 Total number of contained volumes: 320 Type of backup set: Time: Num volumes: Full Sat Mar 5 13:58:46 2011 234 Incremental Mon Mar 7 15:49:41 2011 7 Incremental Mon Mar 7 18:39:47 2011 5 Incremental Tue Mar 8 18:09:43 2011 2 Incremental Wed Mar 9 11:55:35 2011 11 Incremental Wed Mar 9 18:02:20 2011 2 Incremental Fri Mar 11 12:25:15 2011 16 Incremental Mon Mar 14 10:42:37 2011 12 Incremental Tue Mar 15 17:51:25 2011 4 Incremental Fri Mar 18 17:06:37 2011 5 Incremental Mon Mar 21 17:01:32 2011 5 Incremental Wed Mar 23 16:47:17 2011 4 Incremental Thu Mar 24 17:00:35 2011 2 Incremental Sat Mar 26 17:34:12 2011 2 Incremental Sun Mar 27 17:07:15 2011 2 Incremental Mon Mar 28 19:05:42 2011 3 Incremental Wed Mar 30 17:21:39 2011 4 ------------------------- No orphaned or incomplete backup sets found. 
Removing still remembered temporary file /tmp/duplicity-gil1cb- tempdir/mkstemp-vb6llm-1 Removing still remembered temporary file /tmp/duplicity-gil1cb- tempdir/mkstemp-ycLH7u-2 Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1250, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1243, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1197, in main restore(col_stats) File ""/usr/local/bin/duplicity"", line 539, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 521, in Write_ROPaths for ropath in rop_iter: File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 493, in integrate_patch_iters for patch_seq in collated: File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 378, in yield_tuples setrorps( overflow, elems ) File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 367, in setrorps elems[i] = iter_list[i].next() File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 112, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 328, in next self.set_tarfile() File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 322, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/local/bin/duplicity"", line 575, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 1 What am I doing wrong ? ```",6 118019688,2011-04-04 18:17:01.197,"duplicity crashes when attempted to read a ""write only"" file (lp:#750570)","[Original report](https://bugs.launchpad.net/bugs/750570) created by **Joshua Jensen (joshua-joshuajensen)** ``` When doing a backup of my machine, duplicity dies on reading in the files in /selinux This is not a fake or mounted filesystem like say /sys or /proc, it is a real ext4 fs: $ ls -l /selinux/ total 0 -rw-rw-rw-. 
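The bare `KeyError: 1` at the bottom comes from indexing `backup_set.volume_name_dict` with a volume number the remote listing never supplied. A small guard that turns it into an actionable message; the helper itself is hypothetical, though the dict name mirrors the traceback:

```python
def volume_filename(volume_name_dict, vol_num):
    """Look up a volume's remote filename, failing with a clear message
    when the backend listing is missing that volume."""
    try:
        return volume_name_dict[vol_num]
    except KeyError:
        raise RuntimeError(
            "backup set has no volume %d; the remote listing is "
            "incomplete or the chain is corrupt" % vol_num)
```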
1 root root 0 Mar 17 16:59 access dr-xr-xr-x. 2 root root 0 Mar 17 16:59 avc dr-xr-xr-x. 2 root root 0 Mar 17 16:59 booleans -rw-r--r--. 1 root root 0 Mar 17 16:59 checkreqprot dr-xr-xr-x. 79 root root 0 Mar 17 16:59 class --w-------. 1 root root 0 Mar 17 16:59 commit_pending_bools -rw-rw-rw-. 1 root root 0 Mar 17 16:59 context -rw-rw-rw-. 1 root root 0 Mar 17 16:59 create -r--r--r--. 1 root root 0 Mar 17 16:59 deny_unknown --w-------. 1 root root 0 Mar 17 16:59 disable -rw-r--r--. 1 root root 0 Mar 17 16:59 enforce dr-xr-xr-x. 2 root root 0 Mar 17 16:59 initial_contexts -rw-------. 1 root root 0 Mar 17 16:59 load -rw-rw-rw-. 1 root root 0 Mar 17 16:59 member -r--r--r--. 1 root root 0 Mar 17 16:59 mls crw-rw-rw-. 1 root root 1, 3 Mar 17 16:59 null dr-xr-xr-x. 2 root root 0 Mar 17 16:59 policy_capabilities -r--r--r--. 1 root root 0 Mar 17 16:59 policyvers -r--r--r--. 1 root root 0 Mar 17 16:59 reject_unknown -rw-rw-rw-. 1 root root 0 Mar 17 16:59 relabel -rw-rw-rw-. 1 root root 0 Mar 17 16:59 user Notice the ""write-only"" files... 
which duplicity can't handle: A selinux/class/x_synthetic_event/perms/receive A selinux/class/x_synthetic_event/perms/send A selinux/commit_pending_bools Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1245, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1238, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1216, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 417, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib64/python2.6/site-packages/duplicity/gpg.py"", line 279, in GPGWriteFile data = block_iter.next(min(block_size, bytes_to_go)).data File ""/usr/lib64/python2.6/site-packages/duplicity/diffdir.py"", line 505, in next result = self.process(self.input_iter.next(), size) File ""/usr/lib64/python2.6/site-packages/duplicity/diffdir.py"", line 631, in process data, last_block = self.get_data_block(fp, size - 512) File ""/usr/lib64/python2.6/site-packages/duplicity/diffdir.py"", line 658, in get_data_block buf = fp.read(read_size) File ""/usr/lib64/python2.6/site-packages/duplicity/diffdir.py"", line 415, in read buf = self.infile.read(length) File ""/usr/lib64/python2.6/site-packages/duplicity/diffdir.py"", line 384, in read buf = self.infile.read(length) IOError: [Errno 22] Invalid argument Can we get a fix for this? Probably just ignoring these files makes sense. ```",6 118019686,2011-03-31 11:28:27.648,Freeze when remote host has closed the connection (FTP backup) (lp:#746377),"[Original report](https://bugs.launchpad.net/bugs/746377) created by **Anakin Starkiller (sunrider)** ``` I use duplicity to backup a large amount of data to a FTP server (the one provided by my ISP, so I don't have any control over this server). After a while, I've got an error because the remote host has closed the connection... 
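The reporter's suggestion, just ignore files that cannot be read, could be sketched as a wrapper around the failing `read()` call. The errno values below are the ones write-only or restricted special files typically return, and the helper is illustrative, not duplicity's diffdir code:

```python
import errno


def read_or_skip(fileobj, length):
    """Read up to `length` bytes, treating unreadable special files
    (EINVAL from write-only selinuxfs entries, permission errors)
    as empty rather than fatal."""
    try:
        return fileobj.read(length)
    except (IOError, OSError) as err:
        if err.errno in (errno.EINVAL, errno.EACCES, errno.EPERM):
            return b""
        raise
```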
-------------------------------------------------------------------- /duplicity-EaIobp-tempdir/mktemp-bSUbxB-13' 'duplicity- full.20110330T144033Z.vol51.difftar.gpg'' failed with code 3 (attempt #1) Error is: Lost data connection to remote host after 12025856 bytes had been sent: Broken pipe. Remote host has closed the connection. ncftpput duplicity-full.20110330T144033Z.vol51.difftar.gpg: socket write error. ^C Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1249, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1242, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1215, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 417, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 316, in write_multivol (tdp, dest_filename))) File ""/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 315, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest_filename), File ""/usr/bin/duplicity"", line 241, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.7/site-packages/duplicity/backends/ftpbackend.py"", line 98, in put self.run_command_persist(commandline) File ""/usr/lib/python2.7/site-packages/duplicity/backend.py"", line 382, in run_command_persist return self.subprocess_popen_persist(commandline) File ""/usr/lib/python2.7/site-packages/duplicity/backend.py"", line 440, in subprocess_popen_persist result, stdout, stderr = self._subprocess_popen(commandline) File ""/usr/lib/python2.7/site-packages/duplicity/backend.py"", line 403, in _subprocess_popen stdout, stderr = p.communicate() File ""/usr/lib/python2.7/subprocess.py"", line 740, in communicate return self._communicate(input) File ""/usr/lib/python2.7/subprocess.py"", line 
1257, in _communicate stdout, stderr = self._communicate_with_poll(input) File ""/usr/lib/python2.7/subprocess.py"", line 1311, in _communicate_with_poll ready = poller.poll() KeyboardInterrupt ------------------------ As you can see, I have to press control-C to exit the program. Fortunately, my volsize is small enough (15 MB), so I just have to launch duplicity again, and it will resume the transfer almost where it was stopped. So the question is: what can I do to automate this? Or maybe duplicity should not freeze at all in the first place... ``` Original tags: backup closed connection duplicity freeze ftp host remote retries",6
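One way to automate the re-launch the reporter asks for is a bounded retry loop around the upload. `put_fn` here is a stand-in for the backend's put operation; this sketches the pattern, not duplicity's real `--num-retries` machinery:

```python
import time


def put_with_retries(put_fn, path, retries=5, delay=1.0):
    """Retry a flaky upload a fixed number of times before giving up."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return put_fn(path)
        except (OSError, IOError) as err:
            last_error = err
            if attempt < retries:
                time.sleep(delay)
    raise last_error
```

Combined with a per-attempt timeout on the transfer itself, this would also cover the hang, since a stalled attempt would be killed and retried instead of blocking forever.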
Using installed duplicity version 0.6.12, gpg 1.4.10 (Home: ~/.gnupg) Test - Encryption with key 1454AE98 (OK) Test - Decryption with key 1454AE98 (OK) Test - Compare Original w/ Decryption (OK) Cleanup - Delete '/tmp/duply.1007.1301559202_*'(OK) --- Start running command INCR at 10:13:22.292 --- Using archive dir: /root/.cache/duplicity/duply_hourly Using backup name: duply_hourly Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Using WebDAV host webdav.hidrive.strato.com Using WebDAV directory /users/me/hourly/ Using WebDAV protocol http Reading globbing filelist /etc/duply/hourly/exclude Main action: inc ================================================================================ duplicity 0.6.12 (March 08, 2011) Args: /usr/bin/duplicity incr --name duply_hourly --encrypt-key 1454AE98 --sign-key 1454AE98 --verbosity 9 --num-retries 5 --exclude-globbing- filelist /etc/duply/hourly/exclude / webdavs://me@webdav.hidrive.strato.com/users/me/hourly Linux myhost 2.6.33.7-vs2.3.0.36.30.4 #1 SMP Tue Nov 16 08:24:31 UTC 2010 x86_64 /usr/bin/python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] ================================================================================ Using temporary directory /tmp/duplicity-JDYTLm-tempdir Registering (mkstemp) temporary file /tmp/duplicity-JDYTLm-tempdir/mkstemp- _cV4ZB-1 Temp has 37880573952 available, backup will use approx 34078720. 
Listing directory /users/me/hourly/ on WebDAV server WebDAV PROPFIND attempt #1 failed: 200 Listing directory /users/me/hourly/ on WebDAV server Removing still remembered temporary file /tmp/duplicity-JDYTLm- tempdir/mkstemp-_cV4ZB-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1261, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1254, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1155, in main sync_archive() File ""/usr/bin/duplicity"", line 931, in sync_archive remlist = globals.backend.list() File ""/usr/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 165, in list response = self.request(""PROPFIND"", self.directory, self.listbody) File ""/usr/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 107, in request response = self.conn.getresponse() File ""/usr/lib/python2.6/httplib.py"", line 986, in getresponse response.begin() File ""/usr/lib/python2.6/httplib.py"", line 391, in begin version, status, reason = self._read_status() File ""/usr/lib/python2.6/httplib.py"", line 349, in _read_status line = self.fp.readline() File ""/usr/lib/python2.6/socket.py"", line 397, in readline data = recv(1) File ""/usr/lib/python2.6/ssl.py"", line 96, in self.recv = lambda buflen=1024, flags=0: SSLSocket.recv(self, buflen, flags) File ""/usr/lib/python2.6/ssl.py"", line 222, in recv raise x SSLError: The read operation timed out 10:13:52.695 Task 'INCR' failed with exit code '30'. 
---------------------------------------------------------- if I empty the directory, perform a full backup and then an incremental backup the output is: Listing directory /users/me/hourly/ on WebDAV server WebDAV PROPFIND attempt #1 failed: 200 Listing directory /users/me/hourly/ on WebDAV server /users/me/hourly/ 2011-03-31T08:18:49Z Thu, 31 Mar 2011 08:18:49 GMT ""5-5-49fc2f16efdb0"" HTTP/1.1 200 OK /users/me/hourly/duplicity- full.20110331T081843Z.vol1.difftar.gpg 2011-03-31T08:18:49Z 4429620 Thu, 31 Mar 2011 08:18:49 GMT ""179-439734-49fc2f1693948"" F HTTP/1.1 200 OK ---------------------------------------------------------- Is this a bug on the server side? Is there a workaround? ```",6 118019680,2011-03-27 19:49:12.014,reference backups by unique IDs (not url or name) (lp:#743820),"[Original report](https://bugs.launchpad.net/bugs/743820) created by **ceg (ceg)** ``` The idea is to store the UUID in the backup files, and use it for the name of the cache dir. Then the correct cache should always be identifiable. Backup --name option would be stored with the backup files and in the cache. Time of last backups, URLs etc. could be cached for integrity, informational or convenience reasons. If duplicity full/incremental/restore is called only with the --name (or an --ID) option, duplicity could use the cached URL(s) and settings. * The cache name would not contain a hash of the url, and stay valid if url changes. * Named backups could be renamed without breaking older copies. ... ```",14 118019679,2011-03-27 16:50:25.023,New feature: Exclude files based on size (lp:#743725),"[Original report](https://bugs.launchpad.net/bugs/743725) created by **jlh (jlherren)** ``` I'd really like to exclude large files from my backups. I sometimes have huge SQL dumps lying around and they take hours (if not days) to transfer to my backup host. Something like --exclude-size or similar. Don't know if an --include-size counterpart makes any sense. 
```",8 118019675,2011-03-07 08:54:27.639,remove-all-but-n-full does not check for successful backup (lp:#730497),"[Original report](https://bugs.launchpad.net/bugs/730497) created by **Stefan Voelkel (stefan-voelkel)** ``` duplicity 0.6.09 Python 2.5.2 Debian Lenny Target: Linux, via SSH, no encryption My backup script contains: duplicity --exclude-if-present .duplicity.nobackup --include ... --exclude '**' --archive-dir ... / ssh://...//mnt/duplicity/ --no- encryption --full-if-older-than 180D duplicity remove-all-but-n-full 1 ssh://...//mnt/duplicity/ --no- encryption --force The last full backup was more than 180 days old, so duplicity created a new full backup. However, the filesystem was too small, so the new full backup was incomplete. It seems that the call to remove-all-but-n-full does not check whether the last n full backups were successful, as it removed the old (complete) full backup, and kept the new (broken) full backup: $ ls -s total 529732 0 duplicity-full.20110306T114016Z.manifest 25612 duplicity-full.20110306T114016Z.vol10.difftar.gz ... 17808 duplicity-full.20110306T114016Z.vol21.difftar.gz 0 duplicity-full.20110306T114016Z.vol22.difftar.gz 0 duplicity-full.20110306T114016Z.vol23.difftar.gz 25588 duplicity-full.20110306T114016Z.vol2.difftar.gz ... 
25592 duplicity-full.20110306T114016Z.vol9.difftar.gz 0 duplicity-full-signatures.20110306T114016Z.sigtar.gz ```",20 118019672,2011-02-28 22:14:21.762,UnboundLocalError: local variable 'document' referenced before assignment (lp:#726823),"[Original report](https://bugs.launchpad.net/bugs/726823) created by **Tobias Wichtrey (mail-tobias-wichtrey)** ``` When I invoke duplicity with duplicity -v9 localdir/ webdav://username:password@dav.server.com/remotedir/ (with details filled in, of course), then I always get the following error: ------ Linux dhcppc0 2.6.34.7-0.7-desktop #1 SMP PREEMPT 2010-12-13 11:13:53 +0100 x86_64 x86_64 /usr/bin/python 2.6.5 (r265:79063, Oct 28 2010, 20:56:23) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] ================================================================================ Using temporary directory /tmp/duplicity-P_wNM6-tempdir Registering (mkstemp) temporary file /tmp/duplicity-P_wNM6-tempdir/mkstemp- WlUnpu-1 Temp has 8678006784 available, backup will use approx 34078720. 
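Before old chains are pruned, a completeness heuristic like the one this `ls -s` listing suggests (zero-byte volumes or signature files mean the new chain is broken) could be applied; this is a hypothetical helper, not duplicity's collections code:

```python
import os


def chain_looks_complete(dirpath, prefix):
    """Heuristic: a backup chain with any zero-byte member files
    (volumes, manifest, signatures) is treated as incomplete and
    must not justify deleting an older, complete chain."""
    members = [name for name in os.listdir(dirpath) if name.startswith(prefix)]
    if not members:
        return False
    return all(os.path.getsize(os.path.join(dirpath, name)) > 0
               for name in members)
```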
Listing directory /remotedir/ on WebDAV server WebDAV PROPFIND attempt #1 failed: 405 Method Not Allowed Listing directory /remotedir/ on WebDAV server WebDAV PROPFIND attempt #2 failed: 405 Method Not Allowed Listing directory /remotedir/ on WebDAV server WebDAV PROPFIND attempt #3 failed: 405 Method Not Allowed Listing directory /remotedir/ on WebDAV server WebDAV PROPFIND attempt #4 failed: 405 Method Not Allowed Listing directory /remotedir/ on WebDAV server WebDAV PROPFIND attempt #5 failed: 405 Method Not Allowed Removing still remembered temporary file /tmp/duplicity- P_wNM6-tempdir/mkstemp-WlUnpu-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1239, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1232, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1133, in main sync_archive() File ""/usr/bin/duplicity"", line 910, in sync_archive remlist = globals.backend.list() File ""/usr/lib64/python2.6/site- packages/duplicity/backends/webdavbackend.py"", line 181, in list log.Info(""%s"" % (document,)) UnboundLocalError: local variable 'document' referenced before assignment ------ I am using duplicity 0.6.08b with python 2.6.5 on an OpenSuSE 11.3 system. Maybe, it is because my username contains an '@'. I tried writing '\@' and putting the username in ""s, but the same error occcurs. It is also the same with duplicity 0.6.11. 
Cheers Tobias ```",22 118022826,2011-02-25 17:13:23.062,"dejadup crashes cant take backup displays ""Failed with an unknown error."" (lp:#725117)","[Original report](https://bugs.launchpad.net/bugs/725117) created by **Rakesh Jain (rakeshchhabra+launchpad)** ``` $ dpkg-query -W deja-dup duplicity deja-dup 16.1.1-0ubuntu1 duplicity 0.6.10-0ubuntu1 $ lsb_release -d Description: Ubuntu 10.10 $ ```",6 118019283,2011-02-19 01:48:43.106,Manifest needs to list all files in each difftar (lp:#721618),"[Original report](https://bugs.launchpad.net/bugs/721618) created by **nemoinis (nemoinis)** ``` I'm trying to restore a 0.5KB text file from my duplicity backup (stored on the net in my website). I expected duplicity to only download the difftar volume containing the file, but it looks like it's downloading everything, all 34GB of it! In the attached log I've stopped it after volume 22, but by that time it already had downloaded (and discarded) over 1GB of data. I can't afford to waste bandwidth and time. Am I missing the ""bandwidth efficient"" switch? This is using duplicity 0.6.11 on Ubuntu 10.04. I've attached the log (up to when I nuked it). The command line I used is: duplicity restore -t now --verbosity 5 --file-to-restore Documents/MYFILE.txt --archive-dir /path/to/localarchive/ scp://albsync/sync/backup /home/me/tmp/restored_file.txt ```",8 118019671,2011-02-07 22:37:53.415,Duplicity restore fails with ssl socket timeout (lp:#714880),"[Original report](https://bugs.launchpad.net/bugs/714880) created by **Rich Davis (rdavis)** ``` I'm attempting to restore a backup at Rackspace CloudFiles. It consists of 153 volumes in Rackspace Cloudfiles and fails on different volumes with timeouts with ""BackendException: Error downloading 'backup_db1/duplicity- full.20110207T185831Z.vol13.difftar.gz'"" This example indicates failure on volume 13, but I've had it fail on volumes 1, 4, etc. Details below. 
-------- START DT-CF-BACKUP SCRIPT -------- Using archive dir: /root/.cache/duplicity/eb2b19f4b4b8e487e7e198a9559285e3 Using backup name: eb2b19f4b4b8e487e7e198a9559285e3 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Main action: restore ================================================================================ duplicity 0.6.11 (November 20, 2010) Args: /usr/bin/duplicity restore --full-if-older-than 14D -v9 --no- encryption cf+http://backup /var/foo/ Linux myserver.com 2.6.18-194.26.1.el5 #1 SMP Fri Oct 29 14:21:16 EDT 2010 x86_64 x86_64 /usr/bin/python 2.4.3 (#1, Nov 3 2010, 12:52:40) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] ================================================================================ Using temporary directory /tmp/duplicity-c_nD6W-tempdir Registering (mkstemp) temporary file /tmp/duplicity- c_nD6W-tempdir/mkstemp-7c5Ao4-1 Temp has 1839988736 available, backup will use approx 34078720. Listed container 'backup' Local and Remote metadata are synchronized, no sync needed. 
Listed container 'backup' 153 files exist on backend 2 files exist in cache duplicity 0.6.11 Python 2.4.3 Red Hat Enterprise Linux Server release 5.5 (Tikanga) ```",6 118019655,2011-02-06 22:38:37.672,"""cleanup"" does not work for WebDAV (lp:#714299)","[Original report](https://bugs.launchpad.net/bugs/714299) created by **astronic (bugreports-tittel)** ``` Duplicity version: 0.6.11 Python version: 2.6.5 OS: openSUSE 11.3 Target file system: Linux When duplicity got interrupted during operation, naturally it leaves some orphaned files, which I would like to get removed via ""--force cleanup"". However, with WebDAV this doesn't work. The error is as follow: tittel@earth:~/backuptest$ duplicity --force -v9 cleanup webdavs://username:XXX@www.XXXX.net/backup/earth/ Using archive dir: /home/tittel/.cache/duplicity/040dedfeac582892d57cdae29d358bcf Using backup name: 040dedfeac582892d57cdae29d358bcf Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Using WebDAV host www.XXXX.net Using WebDAV directory /backup/earth/ Using WebDAV protocol http Main action: cleanup PASSPHRASE variable not set, asking user. 
GnuPG passphrase: ================================================================================ duplicity 0.6.11 (November 20, 2010) Args: /usr/bin/duplicity --force -v9 cleanup webdavs://username:XXX@www.XXXX.net/backup/earth/ Linux earth 2.6.34.7-0.7-desktop #1 SMP PREEMPT 2010-12-13 11:13:53 +0100 x86_64 x86_64 /usr/bin/python 2.6.5 (r265:79063, Oct 28 2010, 20:56:23) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] ================================================================================ Listing directory /backup/earth/ on WebDAV server WebDAV PROPFIND attempt #1 failed: 200 Listing directory /backup/earth/ on WebDAV server /backup/earth/ 2011-02-06T22:22:43Z Sun, 06 Feb 2011 22:22:43 GMT ""e00158-1000-49ba48dcff2c0"" httpd/unix-directory HTTP/1.1 200 OK /backup/earth/duplicity- full.20110206T221605Z.vol1.difftar.gpg 2011-02-06T22:16:12Z 267137 Sun, 06 Feb 2011 22:16:12 GMT ""e00061-41381-49ba47681c300"" F HTTP/1.1 200 OK /backup/earth/duplicity-full- signatures.20110206T221605Z.sigtar.gpg 2011-02-06T22:16:13Z 7977 Sun, 06 Feb 2011 22:16:13 GMT ""e00062-1f29-49ba476910540"" F HTTP/1.1 200 OK /backup/earth/duplicity-full.20110206T221605Z.manifest.gpg 2011-02-06T22:16:13Z 195 Sun, 06 Feb 2011 22:16:13 GMT ""e00064-c3-49ba476910540"" F HTTP/1.1 200 OK /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.vol1.difftar.gpg 2011-02-06T22:16:48Z 1155791 Sun, 06 Feb 2011 22:16:48 GMT ""e00065-11a2cf-49ba478a71400"" F HTTP/1.1 200 OK /backup/earth/duplicity-new- signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg 2011-02-06T22:16:49Z 35220 Sun, 06 Feb 2011 22:16:49 GMT ""e00067-8994-49ba478b65640"" F HTTP/1.1 200 OK /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.manifest.gpg 2011-02-06T22:16:49Z 202 Sun, 06 Feb 2011 22:16:49 GMT ""e00068-ca-49ba478b65640"" F HTTP/1.1 200 OK /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.vol1.difftar.gpg [...] (end of first 200 lines) [...] [...] 
(start of last 200 lines) [...] /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.manifest.gpg 2011-02-06T22:18:05Z 202 Sun, 06 Feb 2011 22:18:05 GMT ""e0006e-ca-49ba47d3e0140"" F HTTP/1.1 200 OK /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.vol1.difftar.gpg 2011-02-06T22:18:51Z 1155633 Sun, 06 Feb 2011 22:18:51 GMT ""e00072-11a231-49ba47ffbe8c0"" F HTTP/1.1 200 OK /backup/earth/duplicity-new- signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg 2011-02-06T22:18:52Z 35270 Sun, 06 Feb 2011 22:18:52 GMT ""e00076-89c6-49ba4800b2b00"" F HTTP/1.1 200 OK /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.manifest.gpg 2011-02-06T22:18:53Z 191 Sun, 06 Feb 2011 22:18:53 GMT ""e00081-bf-49ba4801a6d40"" F HTTP/1.1 200 OK webdav path decoding and translation: /backup/earth/ -> /backup/earth/ webdav path decoding and translation: /backup/earth/duplicity- full.20110206T221605Z.vol1.difftar.gpg -> /backup/earth/duplicity- full.20110206T221605Z.vol1.difftar.gpg webdav path decoding and translation: /backup/earth/duplicity-full- signatures.20110206T221605Z.sigtar.gpg -> /backup/earth/duplicity-full- signatures.20110206T221605Z.sigtar.gpg webdav path decoding and translation: /backup/earth/duplicity- full.20110206T221605Z.manifest.gpg -> /backup/earth/duplicity- full.20110206T221605Z.manifest.gpg webdav path decoding and translation: /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.vol1.difftar.gpg -> /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.vol1.difftar.gpg webdav path decoding and translation: /backup/earth/duplicity-new- signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg -> /backup/earth/duplicity-new- signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg webdav path decoding and translation: /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.manifest.gpg -> /backup/earth/duplicity- inc.20110206T221605Z.to.20110206T221633Z.manifest.gpg webdav path 
decoding and translation: /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.vol1.difftar.gpg -> /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.vol1.difftar.gpg webdav path decoding and translation: /backup/earth/duplicity-new- signatures.20110206T221633Z.to.20110206T221748Z.sigtar.gpg -> /backup/earth/duplicity-new- signatures.20110206T221633Z.to.20110206T221748Z.sigtar.gpg webdav path decoding and translation: /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.manifest.gpg -> /backup/earth/duplicity- inc.20110206T221633Z.to.20110206T221748Z.manifest.gpg webdav path decoding and translation: /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.vol1.difftar.gpg -> /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.vol1.difftar.gpg webdav path decoding and translation: /backup/earth/duplicity-new- signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg -> /backup/earth/duplicity-new- signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg webdav path decoding and translation: /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.manifest.gpg -> /backup/earth/duplicity- inc.20110206T221748Z.to.20110206T221836Z.manifest.gpg 12 files exist on backend 11 files exist in cache Extracting backup chains from list of files: ['duplicity-new- signatures.20110206T221633Z.to.20110206T221657Z.sigtar.part', 'duplicity- new-signatures.20110206T221657Z.to.20110206T221748Z.sigtar.part', 'duplicity-inc.20110206T221657Z.to.20110206T221748Z.manifest.part', u'duplicity-full.20110206T221605Z.vol1.difftar.gpg', u'duplicity-full- signatures.20110206T221605Z.sigtar.gpg', u'duplicity- full.20110206T221605Z.manifest.gpg', u'duplicity- inc.20110206T221605Z.to.20110206T221633Z.vol1.difftar.gpg', u'duplicity- new-signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg', u'duplicity-inc.20110206T221605Z.to.20110206T221633Z.manifest.gpg', 
u'duplicity-inc.20110206T221633Z.to.20110206T221748Z.vol1.difftar.gpg', u'duplicity-new- signatures.20110206T221633Z.to.20110206T221748Z.sigtar.gpg', u'duplicity- inc.20110206T221633Z.to.20110206T221748Z.manifest.gpg', u'duplicity- inc.20110206T221748Z.to.20110206T221836Z.vol1.difftar.gpg', u'duplicity- new-signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg', u'duplicity-inc.20110206T221748Z.to.20110206T221836Z.manifest.gpg'] File duplicity-new- signatures.20110206T221633Z.to.20110206T221657Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110206T221633Z.to.20110206T221657Z.sigtar.part' File duplicity-new- signatures.20110206T221657Z.to.20110206T221748Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110206T221657Z.to.20110206T221748Z.sigtar.part' File duplicity-inc.20110206T221657Z.to.20110206T221748Z.manifest.part is not part of a known set; creating new set File duplicity-full.20110206T221605Z.vol1.difftar.gpg is not part of a known set; creating new set File duplicity-full-signatures.20110206T221605Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- signatures.20110206T221605Z.sigtar.gpg' File duplicity-full.20110206T221605Z.manifest.gpg is part of known set File duplicity-inc.20110206T221605Z.to.20110206T221633Z.vol1.difftar.gpg is not part of a known set; creating new set File duplicity-new- signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110206T221605Z.to.20110206T221633Z.sigtar.gpg' File duplicity-inc.20110206T221605Z.to.20110206T221633Z.manifest.gpg is part of known set File duplicity-inc.20110206T221633Z.to.20110206T221748Z.vol1.difftar.gpg is not part of a known set; creating new set File duplicity-new- 
signatures.20110206T221633Z.to.20110206T221748Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110206T221633Z.to.20110206T221748Z.sigtar.gpg' File duplicity-inc.20110206T221633Z.to.20110206T221748Z.manifest.gpg is part of known set File duplicity-inc.20110206T221748Z.to.20110206T221836Z.vol1.difftar.gpg is not part of a known set; creating new set File duplicity-new- signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20110206T221748Z.to.20110206T221836Z.sigtar.gpg' File duplicity-inc.20110206T221748Z.to.20110206T221836Z.manifest.gpg is part of known set Found backup chain [Sun Feb 6 23:16:05 2011]-[Sun Feb 6 23:16:05 2011] Added incremental Backupset (start_time: Sun Feb 6 23:16:05 2011 / end_time: Sun Feb 6 23:16:33 2011) Added set Sun Feb 6 23:16:33 2011 to pre-existing chain [Sun Feb 6 23:16:05 2011]-[Sun Feb 6 23:16:33 2011] Added incremental Backupset (start_time: Sun Feb 6 23:16:33 2011 / end_time: Sun Feb 6 23:17:48 2011) Added set Sun Feb 6 23:17:48 2011 to pre-existing chain [Sun Feb 6 23:16:05 2011]-[Sun Feb 6 23:17:48 2011] Ignoring incremental Backupset (start_time: Sun Feb 6 23:16:57 2011; needed: Sun Feb 6 23:17:48 2011) Found orphaned set Sun Feb 6 23:17:48 2011 Added incremental Backupset (start_time: Sun Feb 6 23:17:48 2011 / end_time: Sun Feb 6 23:18:36 2011) Added set Sun Feb 6 23:18:36 2011 to pre-existing chain [Sun Feb 6 23:16:05 2011]-[Sun Feb 6 23:18:36 2011] Warning, found the following local orphaned signature files: duplicity-new-signatures.20110206T221633Z.to.20110206T221657Z.sigtar.part duplicity-new-signatures.20110206T221657Z.to.20110206T221748Z.sigtar.part Warning, found the following orphaned backup file: [duplicity-inc.20110206T221657Z.to.20110206T221748Z.manifest.part] Last full backup date: Sun Feb 6 23:16:05 2011 Collection Status 
----------------- Connecting with backend: WebDAVBackend Archive dir: /home/tittel/.cache/duplicity/040dedfeac582892d57cdae29d358bcf Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sun Feb 6 23:16:05 2011 Chain end time: Sun Feb 6 23:18:36 2011 Number of contained backup sets: 4 Total number of contained volumes: 4  Type of backup set: Time: Num volumes:                 Full Sun Feb 6 23:16:05 2011 1          Incremental Sun Feb 6 23:16:33 2011 1          Incremental Sun Feb 6 23:17:48 2011 1          Incremental Sun Feb 6 23:18:36 2011 1 ------------------------- Also found 1 backup set not part of any chain, and 0 incomplete backup sets. These may be deleted by running duplicity with the ""cleanup"" command. Deleting these files from backend: duplicity-new-signatures.20110206T221633Z.to.20110206T221657Z.sigtar.part duplicity-new-signatures.20110206T221657Z.to.20110206T221748Z.sigtar.part duplicity-inc.20110206T221657Z.to.20110206T221748Z.manifest.part Deleting /backup/earth/duplicity- inc.20110206T221657Z.to.20110206T221748Z.manifest.part from WebDAV server WebDAV DELETE attempt #1 failed: 404 Not Found Deleting /backup/earth/duplicity- inc.20110206T221657Z.to.20110206T221748Z.manifest.part from WebDAV server Using temporary directory /tmp/duplicity-dk08u1-tempdir Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1245, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1238, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1200, in main     cleanup(col_stats)   File ""/usr/bin/duplicity"", line 699, in cleanup     col_stats.backend.delete(ext_remote)   File ""/usr/lib64/python2.6/site- packages/duplicity/backends/webdavbackend.py"", line 266, in delete     response = self.request(""DELETE"", url)   File ""/usr/lib64/python2.6/site- packages/duplicity/backends/webdavbackend.py"", line 107, in request     response = 
self.conn.getresponse()   File ""/usr/lib64/python2.6/httplib.py"", line 976, in getresponse     raise ResponseNotReady() ResponseNotReady ```",14 118019650,2011-02-06 21:58:54.467,Reply to request for error correction ideas for new format (lp:#714278),"[Original report](https://bugs.launchpad.net/bugs/714278) created by **Stuart Gathman (stuart-gathman)** ``` The web page describing the tar replacement format asks for ideas to improve error recovery. Here is an important one that I learned from designing a file system that has no critical blocks (can survive corruption or loss anywhere with limited data loss). Each file (including meta files such as index data) should be assigned an ""id"" within the backup. The block header for each block should include the file id, and perhaps an offset within the file. This way, even with loss of meta data, blocks belonging to the same file can be identified, and files still recovered in a ""lost+found"" directory. The pointers in the index are not sufficient, because you often don't know how many bytes have been ""skipped"" when recovering from errors. (And you may not have the index.) ```",6 118019645,2011-02-06 17:50:10.128,"Backup on a webdav server fails with ""SSLError: The read operation timed out"" (lp:#714175)","[Original report](https://bugs.launchpad.net/bugs/714175) created by **Lutz Niggl (lutz-niggl)** ``` Version 6.11 Python 2.7 Suse 11.3 webdavs backend Backing up to webdav.mediencenter.t-online.de Error ""SSLError: The read operation timed out"" repeatedly shows up after some 10-30MB. (I reduced volsize to 5MB). Restarting works and continues the backup. Once the full backup is on the server the incremental backups run without observed problems. 
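The self-identifying block header proposed in lp:#714278 above (every block carries the id of the file it belongs to, plus an offset, so data can be re-associated with files even if all index metadata is lost) could look like the following sketch. The magic bytes and field layout are invented for illustration, not an actual duplicity format.

```python
import struct

# Hypothetical fixed-size block header: magic, file id, offset within the
# file, payload length (big-endian). A real format would also checksum the
# header so a corrupted length cannot derail the scan.
HEADER = struct.Struct(">4sQQI")
MAGIC = b"BLK1"

def pack_block(file_id, offset, payload):
    return HEADER.pack(MAGIC, file_id, offset, len(payload)) + payload

def scan_blocks(data):
    """Recover (file_id, offset, payload) triples by scanning for MAGIC,
    resynchronising after corrupt regions instead of trusting an index --
    recovered blocks could then be reassembled into 'lost+found' files."""
    found, i = [], 0
    while True:
        i = data.find(MAGIC, i)
        if i < 0:
            break
        try:
            _, fid, off, n = HEADER.unpack_from(data, i)
        except struct.error:  # truncated header at end of stream
            break
        found.append((fid, off, data[i + HEADER.size:i + HEADER.size + n]))
        i += HEADER.size + n
    return found
```

Scanning for the magic value is what makes recovery possible without knowing how many bytes were skipped, which is exactly the gap the report identifies in index pointers alone.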
Rgrds Lutz ```",22 118019602,2011-02-05 22:25:29.446,gpg: fatal: zlib inflate problem: incorrect data check (lp:#713832),"[Original report](https://bugs.launchpad.net/bugs/713832) created by **William Deninger (wdeninger)** ``` duplicity 0.6.11 Python 2.6.5 Linux SOUTH 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux This has been an issue for some time (pre 0.6.8b through 0.6.11). I create the incident by performing a full backup of a data repository path to a mounted drive on the same system. The failure is produced when attempting to verify, restore or file restore the specific file which fails. Repository backup call: nohup duplicity --encrypt-key ""8ABB1A41"" --sign-key ""8ABB1A41"" --verbosity 9 ""/media/MadDog/Archive/"" ""file:///media/Poodle/backup/media/MadDog/Archive/"" >> localbackup.log & No errors or warnings are issued during the backup phase: AsyncScheduler: running task synchronously (asynchronicity disabled) Writing /media/Poodle/backup/./media/Poodle/Archive.dup/Photo Archive/duplicity-full.20100902T170354Z.vol1147.difftar.gpg Deleting /tmp/duplicity-X5gfOu-tempdir/mktemp-CVCWXz-1148 Forgetting temporary file /tmp/duplicity-X5gfOu-tempdir/mktemp-CVCWXz-1148 AsyncScheduler: task completed successfully Processed volume 1147 Registering (mktemp) temporary file /tmp/duplicity-X5gfOu-tempdir/mktemp- RGVAhj-1149 AsyncScheduler: running task synchronously (asynchronicity disabled) Writing /media/Poodle/backup/./media/Poodle/Archive.dup/Photo Archive/duplicity-full.20100902T170354Z.vol1148.difftar.gpg Deleting /tmp/duplicity-X5gfOu-tempdir/mktemp-RGVAhj-1149 Forgetting temporary file /tmp/duplicity-X5gfOu-tempdir/mktemp-RGVAhj-1149 AsyncScheduler: task completed successfully Processed volume 1148 Registering (mktemp) temporary file /tmp/duplicity-X5gfOu-tempdir/mktemp- Vk9ex8-1150 AsyncScheduler: running task synchronously (asynchronicity disabled) Writing /media/Poodle/backup/./media/Poodle/Archive.dup/Photo 
Archive/duplicity-full.20100902T170354Z.vol1149.difftar.gpg Deleting /tmp/duplicity-X5gfOu-tempdir/mktemp-Vk9ex8-1150 Forgetting temporary file /tmp/duplicity-X5gfOu-tempdir/mktemp-Vk9ex8-1150 AsyncScheduler: task completed successfully Processed volume 1149 . . . File duplicity-full.20110205T061747Z.vol3671.difftar.gpg is part of known set Found backup chain [Fri Feb 4 22:17:47 2011]-[Fri Feb 4 22:17:47 2011] --------------[ Backup Statistics ]-------------- StartTime 1296886667.37 (Fri Feb 4 22:17:47 2011) EndTime 1296901383.13 (Sat Feb 5 02:23:03 2011) ElapsedTime 14715.76 (4 hours 5 minutes 15.76 seconds) SourceFiles 50598 SourceFileSize 129262183781 (120 GB) NewFiles 50598 NewFileSize 129262183781 (120 GB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 50598 RawDeltaSize 129247782245 (120 GB) TotalDestinationSizeChange 125396358454 (117 GB) Errors 0 ------------------------------------------------- Removing still remembered temporary file /tmp/duplicity- ll_k8Z-tempdir/mkstemp-fl_3D4-1 During verify, restore and file restore, the following occurs which terminates the restore process: nohup duplicity restore --encrypt-key 8ABB1A41 --sign-key 8ABB1A41 --ignore-errors --verbosity 9 ""file:///media/Poodle/backup/media/MadDog/Archive/"" ""/media/Poodle/restore/"" > localrestore.log & OR duplicity --encrypt-key 8ABB1A41 --sign-key 8ABB1A41 --verbosity debug --file-to-restore ""Photo Archive/1995/19950817 Fort Lauderdale 08.jpg"" ""file:///media/Poodle/backup/media/MadDog/Archive/"" ""/media/Poodle/restore2.jpg/"" > duplicity-failure.log where the file ""Archive/1995/19950817 Fort Lauderdale 08.jpg"" is the failing restore. 
Running in 'ignore errors' mode due to --ignore-errors; please re-consider if this was not intended Using archive dir: /root/.cache/duplicity/0543e0fc57cd727701c014cdcfc1718f Using backup name: 0543e0fc57cd727701c014cdcfc1718f Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Main action: restore ================================================================================ duplicity 0.6.11 (November 20, 2010) Args: /usr/local/bin/duplicity restore --encrypt-key 8ABB1A41 --sign-key 8ABB1A41 --ignore-errors --verbosity 9 file:///media/Poodle/backup//media/MadDog/Archive// /media/Poodle/restore/ Linux SOUTH 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 /usr/bin/python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] ================================================================================ Using temporary directory /tmp/duplicity-ku5lKg-tempdir Registering (mkstemp) temporary file /tmp/duplicity-ku5lKg-tempdir/mkstemp- MP77AK-1 Temp has 136185794560 available, backup will use approx 34078720. Synchronizing remote metadata to local cache... Copying duplicity-full-signatures.20110205T061747Z.sigtar to local cache. Registering (mktemp) temporary file /tmp/duplicity-ku5lKg-tempdir/mktemp- stjUrp-2 Deleting /tmp/duplicity-ku5lKg-tempdir/mktemp-stjUrp-2 Forgetting temporary file /tmp/duplicity-ku5lKg-tempdir/mktemp-stjUrp-2 Copying duplicity-full.20110205T061747Z.manifest to local cache. 
Registering (mktemp) temporary file /tmp/duplicity-ku5lKg-tempdir/mktemp- orBhm3-3 Deleting /tmp/duplicity-ku5lKg-tempdir/mktemp-orBhm3-3 Forgetting temporary file /tmp/duplicity-ku5lKg-tempdir/mktemp-orBhm3-3 4776 files exist on backend 2 files exist in cache . . . Writing Photo Archive/1995/19950817 Fort Lauderdale 07.jpg of type reg Writing Photo Archive/1995/19950817 Fort Lauderdale 08.jpg of type reg Removing still remembered temporary file /tmp/duplicity-ku5lKg- tempdir/mkstemp-MP77AK-1 Removing still remembered temporary file /tmp/duplicity-ku5lKg- tempdir/mktemp-VgFZYz-585 GPG error detail: Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1245, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1238, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1192, in main restore(col_stats) File ""/usr/local/bin/duplicity"", line 539, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 522, in Write_ROPaths ITR( ropath.index, ropath ) File ""/usr/local/lib/python2.6/dist-packages/duplicity/lazy.py"", line 335, in __call__ last_branch.fast_process, args) File ""/usr/local/lib/python2.6/dist-packages/duplicity/robust.py"", line 37, in check_common_error return function(*args) File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 575, in fast_process ropath.copy( self.base_path.new_index( index ) ) File ""/usr/local/lib/python2.6/dist-packages/duplicity/path.py"", line 416, in copy other.writefileobj(self.open(""rb"")) File ""/usr/local/lib/python2.6/dist-packages/duplicity/path.py"", line 591, in writefileobj buf = fin.read(_copy_blocksize) File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 200, in read if not self.addtobuffer(): File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 225, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File 
""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 332, in next self.set_tarfile() File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 320, in set_tarfile assert not self.current_fp.close() File ""/usr/local/lib/python2.6/dist-packages/duplicity/dup_temp.py"", line 210, in close assert not self.fileobj.close() File ""/usr/local/lib/python2.6/dist-packages/duplicity/gpg.py"", line 198, in close self.gpg_failed() File ""/usr/local/lib/python2.6/dist-packages/duplicity/gpg.py"", line 165, in gpg_failed raise GPGError, msg GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: encrypted with 2048-bit RSA key, ID 73B4F60A, created 2010-09-02 ""Firstname Lastname(Duplicity key) "" gpg: fatal: zlib inflate problem: incorrect data check secmem usage: 2624/5664 bytes in 6/16 blocks of pool 5856/32768 ===== End GnuPG log ===== I am willing to provide access to the appropriate engineer if required. The bug has delayed me over a year from using Duplicity, but now that Mozy is jacking up their rates time to resolution is more critical. Thanks, -W ```",14 118018819,2011-01-30 15:20:39.527,duplicity crashes with tmp files when gpg2 installed instead of gpg (lp:#710198),"[Original report](https://bugs.launchpad.net/bugs/710198) created by **pierre (mkzuot)** ``` Hello, I use duplicity on my computer, and wished to use it on a Qnap NAS. duplicity was packaged for ipkg system for this hardware, and that is a good idea ! So, I import my GPG test keys and scripts, but duplicity crash with tmp directorys. I checked free space, permissions, and there is no problem. I run the script as admin on the NAS. Below is the -v9 trace: admin@JD pierretest # ./RemoteBackup.sh 2011-01-30_16:04:08: Backup for local filesystem started 2011-01-30_16:04:08: ... removing old backups Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No old backup sets found, nothing deleted. 2011-01-30_16:04:09: ... 
backing up filesystem Using archive dir: /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8 Using backup name: 450b79617daa87d1ac397876cf9eb4c8 Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.botobackend Succeeded Main action: full ================================================================================ duplicity 0.6.11 (November 20, 2010) Args: /opt/bin/duplicity full -v9 --encrypt-key=7B4DD872 --sign- key=7B4DD872 --volsize=20 --tempdir=/share/MD0_DATA/pierretest/temp /share/MD0_DATA/pierretest/animaux_fond_ecran file:///share/MD0_DATA/pierre3home/test Linux JD 2.6.33.2 #1 Wed Jan 5 02:06:35 CST 2011 armv5tel /opt/bin/python2.6 2.6.6 (r266:84292, Nov 29 2010, 23:41:28) [GCC 4.2.3] ================================================================================ Using temporary directory /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL- tempdir Registering (mkstemp) temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL-tempdir/mkstemp-sg5M2u-1 Temp has 945400791040 available, backup will use approx 27262976. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8 Found 0 secondary backup chains. 
No backup chains with active signatures found No orphaned or incomplete backup sets found. Using temporary directory /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-A1kupI- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-A1kupI- tempdir/mktemp-UVriHK-1 Using temporary directory /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-N8zqdB- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-N8zqdB- tempdir/mktemp-vd5hSf-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL-tempdir/mktemp-nvwoDJ-2 Selecting /share/MD0_DATA/pierretest/animaux_fond_ecran Comparing () and None Getting delta of (() /share/MD0_DATA/pierretest/animaux_fond_ecran dir) and None A . Removing still remembered temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL-tempdir/mkstemp-sg5M2u-1 Cleanup of temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL- tempdir/mkstemp-sg5M2u-1 failed Removing still remembered temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL-tempdir/mktemp-nvwoDJ-2 Cleanup of temporary file /share/MD0_DATA/pierretest/temp/duplicity-GMSDaL- tempdir/mktemp-nvwoDJ-2 failed Cleanup of temporary directory /share/MD0_DATA/pierretest/temp/duplicity- GMSDaL-tempdir failed - this is probably a bug. 
Traceback (most recent call last): File ""/opt/bin/duplicity"", line 1245, in with_tempdir(main) File ""/opt/bin/duplicity"", line 1238, in with_tempdir fn() File ""/opt/bin/duplicity"", line 1211, in main full_backup(col_stats) File ""/opt/bin/duplicity"", line 417, in full_backup globals.backend) File ""/opt/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 275, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 267, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/share/MD0_DATA/pierretest/temp/duplicity-GMSDaL-tempdir/mktemp-nvwoDJ-2' Removing still remembered temporary file /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-A1kupI- tempdir/mktemp-UVriHK-1 Removing still remembered temporary file /root/.cache/duplicity/450b79617daa87d1ac397876cf9eb4c8/duplicity-N8zqdB- tempdir/mktemp-vd5hSf-1 close failed in file object destructor: IOError: [Errno 32] Broken pipe 2011-01-30_16:04:10: Backup for local filesystem complete 2011-01-30_16:04:10: ------------------------------------ I export my GPG passphrase... I am just testing locally (later, I would like to do it on a ftp). I looked at previous bugs, I looked at -v9 trace on my computer, and I can't understand the problem. It is duplicity 0.6.11 (0.6.10 on my computer), Python 2.6.6 (same on my computer). Really, I don't understand. Where should I look? only difference is : Import of duplicity.backends.giobackend Failed: No module named gio but I don't know what I should install to solve this. Many thanks if someone could give me a clue to investigate... 
```",64 118019598,2011-01-19 04:49:41.261,RFE: ability to use -verify as argument in backup command (as well as a restore command) (lp:#704760),"[Original report](https://bugs.launchpad.net/bugs/704760) created by **Aaron Whitehouse (aaron-whitehouse)** ``` Verifying backups is a very important part of the backup process to me. I have complicated duplicity backup commands set up to backup different lists of files on a host to different locations. I have found it very difficult to construct verify commands that can give me confidence that my files have all backed up correctly. It would be excellent if I could add a simple ""-verify"" option to my backup command and have it: (a) complete the backup as normal; and then (b) verify each as if I had managed to construct the correct verify command that mirrors my backup line (ie verifies each of the backed up files contained in the list of the backup command). I would imagine that such an option would be very popular. ```",6 118022565,2011-01-15 18:18:27.502,Provide better error message if gpg is not found (lp:#703345),"[Original report](https://bugs.launchpad.net/bugs/703345) created by **Tokuko (launchpad-net-tokuko)** ``` Currently duplicity complains about a temporary directory if gpg cannot be executed: x@yuna:~# duplicity --log-file duplicity.log /somepath webdavs://someone@someserver/somepath Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. Cleanup of temporary directory /tmp/duplicity-3F1Ehm-tempdir failed - this is probably a bug. 
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1245, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1238, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1216, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 417, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.6/site-packages/duplicity/gpg.py"", line 275, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/usr/lib/python2.6/site-packages/duplicity/gpg.py"", line 267, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-3F1Ehm- tempdir/mktemp-p60v_2-2' Running these two commands on Solaris 11 Express solved the issue for me: x@yuna:~# cd /usr/bin/ x@yuna:/usr/bin# ln -s gpg2 gpg ```",14 118019201,2011-01-08 17:54:57.192,Try overwriting file with an 0-byte file on delete failure (workaround for Hetzner's backup backend) (lp:#700395),"[Original report](https://bugs.launchpad.net/bugs/700395) created by **Daniel Hahler (blueyed)** ``` I've run into an issue with the backup backend provided by Hetzner (a German provider): when the backup space is full, you cannot remove any files, but have to overwrite the file with a 0-byte file of the same name. Only then you can delete the original file. In duplicity the failure looks like this: Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 u123@u123.your-backup.de' (attempt #1) State = sftp, Before = 'Connected to u123.your-backup.de.' 
sftp command: 'cd ""moby/""' State = sftp, Before = 'cd ""moby/""' sftp command: 'rm ""duplicity-full.20110105T145659Z.vol1.difftar.gpg""' State = sftp, Before = 'rm ""duplicity- full.20110105T145659Z.vol1.difftar.gpg"" Removing /moby/duplicity-full.20110105T145659Z.vol1.difftar.gpg' Could not delete file in command='sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 u123@u123.your-backup.de' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 u123@u123.your-backup.de' failed (attempt #1) Replaying this via ""sftp"" looks like this: sftp> rm ""duplicity-full.20110105T145659Z.vol1.difftar.gpg"" Removing /moby/duplicity-full.20110105T145659Z.vol1.difftar.gpg Couldn't delete file: Failure There are two issues here: 1. The error ""Couldn't delete file: Failure"" should get logged. At least with verbosity=9, but also with something like 4 (default?!). 2. At least in case of this exact error (""Couldn't delete file: Failure"") it should try the following procedure: upload a 0-byte file, overwriting the file that should get deleted. If this works, try the ""rm"" again. ```",18 118019581,2011-01-02 20:47:22.654,Nanosecond timestamp support (lp:#696614),"[Original report](https://bugs.launchpad.net/bugs/696614) created by **Mechanical snail (replicator-snail)** ``` Duplicity currently has a timestamp granularity of 1 second, whereas current Linux filesystems support nanosecond resolution. Preserving high- resolution timestamps is important for many purposes (e.g. GNU make relies on the full nanosecond timestamps), besides the principle that a backup should preserve everything). Possibly relevant implementation details: GNU tar does support nanosecond timestamps, but you have to explicitly tell it to create POSIX-format tar files. Since GNU tar automatically detects the POSIX format when extracting, it should be possible to fix this without breaking backwards compatibility. 
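On the Python side, the nanosecond fields this report asks duplicity to preserve are already exposed by the standard library. A small sketch (assuming a filesystem with nanosecond timestamp support, e.g. ext4 or tmpfs) of a lossless mtime round-trip via `os.stat().st_mtime_ns` and `os.utime(..., ns=...)`:

```python
import os
import tempfile

def copy_mtime_ns(src, dst):
    """Propagate atime/mtime from src to dst at full nanosecond resolution."""
    st = os.stat(src)
    os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns))

# Demo: stamp a file with a sub-second mtime and read it back.
fd, path = tempfile.mkstemp()
os.close(fd)
os.utime(path, ns=(1291038929123456789, 1291038929123456789))
stamped = os.stat(path).st_mtime_ns
os.unlink(path)
```

On filesystems without nanosecond resolution the stored value is truncated, which is exactly the granularity loss the report describes.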
```",6 118019197,2010-12-19 19:19:12.421,Crash when destination volume has no free space (lp:#692305),"[Original report](https://bugs.launchpad.net/bugs/692305) created by **Milan Bouchet-Valat (nalimilan)** ``` I'm getting this trace when my destination directory is full (it's a NFS volume mounted to /mnt). duplicity is 0.6.10-0ubuntu1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1257, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1250, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1232, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 488, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 316, in write_multivol (tdp, dest_filename))) File ""/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 315, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest_filename), File ""/usr/bin/duplicity"", line 241, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.6/dist- packages/duplicity/backends/localbackend.py"", line 57, in put target_path.writefileobj(source_path.open(""rb"")) File ""/usr/lib/python2.6/dist-packages/duplicity/path.py"", line 595, in writefileobj if fin.close() or fout.close(): IOError: [Errno 5] Input/output error ```",12 118019578,2010-12-19 15:35:12.908,Should warn when backup destination doesn't exist (lp:#692237),"[Original report](https://bugs.launchpad.net/bugs/692237) created by **Milan Bouchet-Valat (nalimilan)** ``` If you forget to add a third / to the backup path, like file://mnt/somedir, duplicity doesn't warn, and later fails to find previous signatures. Same if you happen to pass a non-existent directory by mistake. 
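The `file://mnt/somedir` mistake described here is detectable before any backup work starts: in a `file:` URL with only two slashes, `mnt` parses as a host name, not a path component. A hypothetical pre-flight check (not duplicity's actual code) could turn both failure modes into clear errors:

```python
import os
from urllib.parse import urlparse

def check_file_url(url):
    """Return the local path for a file:// URL, flagging likely typos."""
    parts = urlparse(url)
    if parts.scheme != "file":
        raise ValueError("not a file:// URL: %r" % url)
    if parts.netloc:
        # file://mnt/somedir parses 'mnt' as a host: almost always a typo.
        raise ValueError(
            "file:// URL has host %r; did you mean file:///%s%s?"
            % (parts.netloc, parts.netloc, parts.path))
    if not os.path.isdir(parts.path):
        raise ValueError("backup destination %r does not exist" % parts.path)
    return parts.path
```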
Instead of saying: Fatal Error: Unable to start incremental backup. Old signatures not found and incremental specified and instead of starting a full backup when incremental was expected, it would be nicer to state that the backup directory doesn't exist. It's much clearer for users. For the specific case of file:// lacking a /, it would be even better to give the user a hint that the URI was likely mistyped if the directory turns out not to exist. (This is with duplicity 0.6.10-0ubuntu1.) ```",6 118019571,2010-12-16 18:14:26.397,backup via localbackend fails sometimes (lp:#691214),"[Original report](https://bugs.launchpad.net/bugs/691214) created by **Christian Ruppert (idl0r) (spooky85)** ``` backup via localbackend fails sometimes. I'm not sure if that is related but I always do a ""cleanup"" first. (cleanup/remove-older-than/remove-all-but-n-full/remove-all-inc-of-but-n-full) Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1245, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1238, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1220, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 488, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 316, in write_multivol (tdp, dest_filename))) File ""/usr/lib64/python2.6/site-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib64/python2.6/site-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 315, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest_filename), File ""/usr/bin/duplicity"", line 241, in put backend.put(tdp, dest_filename) File ""/usr/lib64/python2.6/site-packages/duplicity/backends/localbackend.py"", line 57, in put target_path.writefileobj(source_path.open(""rb"")) File ""/usr/lib64/python2.6/site-packages/duplicity/path.py"", line 589, in 
writefileobj fout = self.open(""wb"") File ""/usr/lib64/python2.6/site-packages/duplicity/path.py"", line 531, in open result = open(self.name, mode) IOError: [Errno 2] No such file or directory: '/mnt/backup/foo/duplicity- inc.20101214T174307Z.to.20101216T020302Z.vol1.difftar.gpg' ```",6 118019568,2010-12-12 07:06:59.109,wishlist: option to ignore corrupt backup set (lp:#689177),"[Original report](https://bugs.launchpad.net/bugs/689177) created by **az (az-debian)** ``` this is a forward of debian bug #606182, which lives here: http://bugs.debian.org/606182 the original reporter can't restore any of a number of full backups, because duplicity croaks on a broken/empty remote manifest file that belongs to one of the backup sets. (i think the empty files were caused by some vfs/disk cache or unmount issue, as the data is on a removable disk.) as far as i can tell, duplicity can't be instructed to disregard such a faulty backup set - which is what is needed in the long run. regards, az ```",14 118019566,2010-12-08 12:07:20.925,fix documentation: duplicity can't backup a remote folder (lp:#687291),"[Original report](https://bugs.launchpad.net/bugs/687291) created by **edso (ed.so)** ``` Manpage suggests: SYNOPSIS duplicity [options] source_directory target_url duplicity [options] source_url target_directory to restore I need --restore-too. In man it looks like I *can* backup remote dir over scp. I think it should be cleaner in man that duplicity can't backup remote dir. ```",6 118019559,2010-12-08 02:18:55.466,MemoryError occurs with large signature files. (lp:#686839),"[Original report](https://bugs.launchpad.net/bugs/686839) created by **Tom Eastman (tveastman)** ``` I just tried switching from using the SFTP backend to the WebDAV backend. But am unable to do so because the WebDAV backend crashes when trying to retrieve the signature file from my backup set. 
================================================================================ duplicity 0.6.09 (July 25, 2010) Args: /usr/bin/duplicity -v5 --name XXXX_full --volsize 500 --exclude- globbing-filelist /etc/duplicity/ALL.exclude --include-globbing-filelist /etc/duplicity/ALL.include / webdavs://XXXXX@XXXX.XXXX/duplicity/XXXX.XXXX Linux XXXXXXXXXXX 2.6.26-2-xen-amd64 #1 SMP Thu Sep 16 16:32:15 UTC 2010 x86_64 /usr/bin/python 2.5.2 (r252:60911, Jan 24 2010, 17:44:40) [GCC 4.3.2] ================================================================================ Synchronizing remote metadata to local cache... Deleting local /home/XXXXXX/.cache/duplicity/XXXXXXXX_full/duplicity-full- signatures.20101110T214029Z.sigtar.gz (not authoritative at backend). Deleting local /home/XXXX/.cache/duplicity/XXXXXXXXXX/duplicity- full.20101110T214029Z.manifest (not authoritative at backend). Copying duplicity-full-signatures.20100917T120401Z.sigtar to local cache. Retrieving /duplicity/XXXXXXXXX/duplicity-full- signatures.20100917T120401Z.sigtar.gpg from WebDAV server Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1145, in main sync_archive() File ""/usr/bin/duplicity"", line 959, in sync_archive copy_to_local(fn) File ""/usr/bin/duplicity"", line 911, in copy_to_local fileobj = globals.backend.get_fileobj_read(rem_name) File ""/usr/lib/python2.5/site-packages/duplicity/backend.py"", line 463, in get_fileobj_read self.get(filename, tdp) File ""/usr/lib/python2.5/site- packages/duplicity/backends/webdavbackend.py"", line 235, in get target_file.write(response.read()) File ""/usr/lib/python2.5/httplib.py"", line 516, in read s = self._safe_read(self.length) File ""/usr/lib/python2.5/httplib.py"", line 607, in _safe_read return ''.join(s) MemoryError If I read this correctly, then 'target_file.write(response.read())' is trying to read the entire 
response into a string in memory before writing it to a file. This particular signature file is 1.2 gigabytes and duplicity thus exhausts all RAM, but in general I'll bet the signature file will ALWAYS be big enough that you don't want to deal with it as a string in RAM. My workaround for the moment will just have to be to return to the SFTP backend, which doesn't have this problem. ```",24 118019556,2010-11-29 13:56:44.637,no support for negative uid/gid (lp:#682667),"[Original report](https://bugs.launchpad.net/bugs/682667) created by **Jamus Jegier (jamus+launchpad)** ``` duplicity 0.6.10 Python 2.6.1 Mac OS X 10.6.5 Server Reproduction instructions: mkdir test touch test/test.txt chown nobody:nobody test/test.txt duplicity -v9 test file:///tmp/test2 Nobody has a UID/GID of -2. Log: Using archive dir: /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd Using backup name: a8e08b1a8a3d633f5cd35a8446eaeddd Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: inc ================================================================================ duplicity 0.6.10 (September 19, 2010) Args: /opt/local/bin/duplicity -v9 test file:///tmp/test2 Darwin jamus.org 10.5.0 Darwin Kernel Version 10.5.0: Fri Nov 5 23:20:39 PDT 2010; root:xnu-1504.9.17~1/RELEASE_I386 i386 i386 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python 2.7 (r27:82500, Nov 10 2010, 13:13:43) [GCC 4.2.1 (Apple 
Inc. build 5664)] ================================================================================ Using temporary directory /tmp/duplicity-rHj3Vh-tempdir Registering (mkstemp) temporary file /tmp/duplicity-rHj3Vh-tempdir/mkstemp- cScI30-1 Temp has 310436773888 available, backup will use approx 34078720. Synchronizing remote metadata to local cache... Deleting local /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- full-signatures.20101129T135516Z.sigtar.gz (not authoritative at backend). Deleting local /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- full.20101129T135516Z.manifest (not authoritative at backend). 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No signatures found, switching to full backup. Using temporary directory /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity-9KoRj8-tempdir Registering (mktemp) temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity-9KoRj8-tempdir/mktemp- jzOUvt-1 Using temporary directory /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- cho6d6-tempdir Registering (mktemp) temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- cho6d6-tempdir/mktemp-Ly_ulT-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-rHj3Vh- tempdir/mktemp-4ghA_B-2 Selecting test Comparing () and None Getting delta of (() test dir) and None A . 
Selecting test/test.txt Comparing ('test.txt',) and None Getting delta of (('test.txt',) test/test.txt reg) and None A test.txt uid -2 of file signature/test.txt not in range. Setting uid to 60001 gid -2 of file signature/test.txt not in range. Setting gid to 60001 uid -2 of file snapshot/test.txt not in range. Setting uid to 60001 gid -2 of file snapshot/test.txt not in range. Setting gid to 60001 Removing still remembered temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity-9KoRj8-tempdir/mktemp- jzOUvt-1 Cleanup of temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity-9KoRj8-tempdir/mktemp- jzOUvt-1 failed Removing still remembered temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- cho6d6-tempdir/mktemp-Ly_ulT-1 Cleanup of temporary file /Users/admin/.cache/duplicity/a8e08b1a8a3d633f5cd35a8446eaeddd/duplicity- cho6d6-tempdir/mktemp-Ly_ulT-1 failed AsyncScheduler: running task synchronously (asynchronicity disabled) Writing /tmp/test2/duplicity-full.20101129T135529Z.vol1.difftar.gpg Deleting /tmp/duplicity-rHj3Vh-tempdir/mktemp-4ghA_B-2 Forgetting temporary file /tmp/duplicity-rHj3Vh-tempdir/mktemp-4ghA_B-2 AsyncScheduler: task completed successfully Processed volume 1 Writing /tmp/test2/duplicity-full-signatures.20101129T135529Z.sigtar.gpg Writing /tmp/test2/duplicity-full.20101129T135529Z.manifest.gpg 3 files exist on backend 2 files exist in cache Extracting backup chains from list of files: ['duplicity-full- signatures.20101129T135529Z.sigtar.gpg', 'duplicity- full.20101129T135529Z.manifest.gpg', 'duplicity- full.20101129T135529Z.vol1.difftar.gpg'] File duplicity-full-signatures.20101129T135529Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- signatures.20101129T135529Z.sigtar.gpg' File duplicity-full.20101129T135529Z.manifest.gpg is not part of a known set; creating new set File 
duplicity-full.20101129T135529Z.vol1.difftar.gpg is part of known set Found backup chain [Mon Nov 29 07:55:29 2010]-[Mon Nov 29 07:55:29 2010] --------------[ Backup Statistics ]-------------- StartTime 1291038929.51 (Mon Nov 29 07:55:29 2010) EndTime 1291038929.57 (Mon Nov 29 07:55:29 2010) ElapsedTime 0.05 (0.05 seconds) SourceFiles 2 SourceFileSize 102 (102 bytes) NewFiles 2 NewFileSize 102 (102 bytes) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 2 RawDeltaSize 0 (0 bytes) TotalDestinationSizeChange 240 (240 bytes) Errors 0 ------------------------------------------------- Removing still remembered temporary file /tmp/duplicity-rHj3Vh- tempdir/mkstemp-cScI30-1 ```",8 118022816,2010-11-28 20:25:05.598,"corrupted backup, CRC check failed (lp:#682469)","[Original report](https://bugs.launchpad.net/bugs/682469) created by **TTimo (ttimo)** ``` duplicity 0.6.10 and 0.6.11 I have a 22G backup that is corrupted. That was being backed up from an OSX MacBook Pro with duplicity 0.6.10 compiled against OSX 10.5's python (Leopard). I can't check the exact version as this machine died now. The data was being backed up to Amazon S3. I am trying to recover the backup on a Linux machine with duplicity 0.6.11 after copying the data locally, but it fails with a CRC error after about 5G of data being processed. I was able to recover more of the data past the initial CRC check failure by asking for specific folders and files, and I eventually narrowed things down to a short list of files that could not be recovered. I am attaching the error log for the first file that I can not recover. I doubt there is much way to recover all the individual files at this point, but I thought I'd point out a few things: - I highly recommend the --test-restore option getting done (https://bugs.launchpad.net/bugs/643973) - It took some serious script-fu to continue recovering past the initial CRC error. 
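The "script-fu" the reporter alludes to usually amounts to retrying each path separately, so one corrupt volume does not abort the whole restore. A sketch of such a salvage loop using duplicity's `--file-to-restore` option (the URL, paths, and 0.6-era command shape here are illustrative assumptions):

```python
import subprocess

def build_restore_cmd(source_url, relpath, target_dir):
    """Command line restoring a single path from a backup (0.6-era syntax)."""
    return ["duplicity", "--file-to-restore", relpath,
            source_url, "%s/%s" % (target_dir, relpath)]

def salvage(source_url, relpaths, target_dir):
    """Attempt each path independently; return the ones that still fail."""
    failed = []
    for rp in relpaths:
        if subprocess.call(build_restore_cmd(source_url, rp, target_dir)) != 0:
            failed.append(rp)
    return failed
```

Running each path in its own process means a CRC failure on one file only marks that file as lost instead of stopping the recovery.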
There's still a significant amount of data that is lost, but it's not that bad. Duplicity should have an option to continue working through the backup and attempt to recover as much as it can. ```",10 118018817,2010-11-18 20:38:14.221,Feature Request: Make restore support include/exclude parameters (lp:#677177),"[Original report](https://bugs.launchpad.net/bugs/677177) created by **Daniel Hahler (blueyed)** ``` When you have to retrieve multiple files, you currently have to call duplicity multiple times using ""--file-to-restore"" for every single file/folder. It would be useful to specify patterns like with the exclude/include lists. ```",58 118022802,2010-11-18 00:33:36.250,IOError: CRC check failed (lp:#676767),"[Original report](https://bugs.launchpad.net/bugs/676767) created by **Lukas (l-niemeyer)** ``` NOTE FOR PEOPLE EXPERIENCING THIS PROBLEM: The instructions at http://live.gnome.org/DejaDup/Help/Restore/WorstCase seem to help people recover from this. Duplicity works, but Deja Dup doesn't. Original report (edited for more brevity) below. ----------------- Hi, I can't restore my backup, deja dup shows an error: ----------------- DUPLICITY: ERROR 30 IOError DUPLICITY: . Traceback (most recent call last): DUPLICITY: . File ""/usr/bin/duplicity"", line 1257, in DUPLICITY: . with_tempdir(main) DUPLICITY: . File ""/usr/bin/duplicity"", line 1250, in with_tempdir DUPLICITY: . fn() DUPLICITY: . File ""/usr/bin/duplicity"", line 1204, in main DUPLICITY: . restore(col_stats) DUPLICITY: . File ""/usr/bin/duplicity"", line 539, in restore DUPLICITY: . restore_get_patched_rop_iter(col_stats)): DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/patchdir.py"", line 522, in Write_ROPaths DUPLICITY: . ITR( ropath.index, ropath ) DUPLICITY: . File ""/usr/lib/python2.6/dist-packages/duplicity/lazy.py"", line 335, in __call__ DUPLICITY: . last_branch.fast_process, args) DUPLICITY: . 
File ""/usr/lib/python2.6/dist-packages/duplicity/robust.py"", line 37, in check_common_error DUPLICITY: . return function(*args) DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/patchdir.py"", line 575, in fast_process DUPLICITY: . ropath.copy( self.base_path.new_index( index ) ) DUPLICITY: . File ""/usr/lib/python2.6/dist-packages/duplicity/path.py"", line 416, in copy DUPLICITY: . other.writefileobj(self.open(""rb"")) DUPLICITY: . File ""/usr/lib/python2.6/dist-packages/duplicity/path.py"", line 591, in writefileobj DUPLICITY: . buf = fin.read(_copy_blocksize) DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/patchdir.py"", line 200, in read DUPLICITY: . if not self.addtobuffer(): DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/patchdir.py"", line 221, in addtobuffer DUPLICITY: . self.buffer += fp.read() DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/tarfile.py"", line 1338, in _readnormal DUPLICITY: . return self.fileobj.read(bytestoread) DUPLICITY: . File ""/usr/lib/python2.6/dist- packages/duplicity/dup_temp.py"", line 204, in read DUPLICITY: . return self.fileobj.read(length) DUPLICITY: . File ""/usr/lib/python2.6/gzip.py"", line 219, in read DUPLICITY: . self._read(readsize) DUPLICITY: . File ""/usr/lib/python2.6/gzip.py"", line 284, in _read DUPLICITY: . self._read_eof() DUPLICITY: . File ""/usr/lib/python2.6/gzip.py"", line 304, in _read_eof DUPLICITY: . hex(self.crc))) DUPLICITY: . IOError: CRC check failed 0xbfacd5c8L != 0xca281a1cL DUPLICITY: . ** (deja-dup:4599): DEBUG: DuplicityInstance.vala:553: duplicity (4613) exited with value 30 --------------------- Version of deja-dup and duplicity deja-dup 16.0-0ubuntu1 duplicity 0.6.10-0ubuntu1 --------------- System: Ubuntu 10.04.1 LTS I backed up the Data using Ubuntu 10.04 as well. Created the backup on an external harddrive. Formated my harddisk today and reinstalled Ubuntu 10.04. Then I tried to restore the data. Hope you can help me. 
``` Original tags: restore",30 118018764,2010-11-16 14:04:43.900,"duplicity falsely reports succeeded backup, empty manifest file (lp:#676042)","[Original report](https://bugs.launchpad.net/bugs/676042) created by **9johnny (s.j.)** ``` duplicity version duplicity 0.6.09 Python 2.5.2 Distributor ID: Debian Description: Debian GNU/Linux 5.0.6 (lenny) Release: 5.0.6 Codename: lenny ==backup log snip== ionice -c 3 nice -n 9 duplicity incremental --full-if-older-than 2W --asynchronous-upload --no-encryption --asynchronous-upload --no-encryption --exclude '/tmp/**' --exclude '/proc/**' --exclude '/dev/**' --exclude '/sys/**' --exclude '/root/.cache/**' --exclude '/var/cache/duplicity/**' --exclude '/var/marsnet/duplicity/**' --exclude '/var/alternc/mnt/**' --exclude '/var/lib/mysql/**' --exclude '/var/alternc/db/**' --exclude '/var/log/mysql/**' / ssh://backup@mybckhost//opt/marsnet/backups/ Local and Remote metadata are synchronized, no sync needed. Last full backup date: Thu Oct 28 01:13:25 2010 Last full backup is too old, forcing full backup --------------[ Backup Statistics ]-------------- StartTime 1289607215.28 (Sat Nov 13 01:13:35 2010) EndTime 1289636885.88 (Sat Nov 13 09:28:05 2010) ElapsedTime 29670.59 (8 hours 14 minutes 30.59 seconds) SourceFiles 1730868 SourceFileSize 106161258939 (98.9 GB) NewFiles 1730868 NewFileSize 106161244989 (98.9 GB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 1730868 RawDeltaSize 105419449446 (98.2 GB) TotalDestinationSizeChange 65491887291 (61.0 GB) Errors 0 ------------------------------------------------- ++ hostname --fqdn + ionice -c 3 nice -n 9 duplicity remove-all-but-n-full 1 --force --no- encryption ssh://backup@mybckhost//opt/marsnet/backups/ Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Sat Nov 13 01:13:30 2010 Deleting backup set at time: Thu Oct 28 01:13:25 2010 Deleting this file from backend: duplicity-full-signatures.20101027T231325Z.sigtar.gz ==end snip== This is the log from the backup that failed. I have conditional execution so backups aren't removed if duplicity fails. In this case there was no error, but I found an empty manifest file in the destination directory. Sorry, I don't use extra verbose output in my backup scripts. All the restore scripts fail silently: they see the archive as empty; could this be the reason why duplicity reported success on exit? (Very annoying: I just had to restore a file, and I'm trying to recover at least a part of it using simple gunzip and tar.) ```",20 118019193,2010-11-15 12:39:55.794,Fix documentation: Why not tar? (lp:#675520),"[Original report](https://bugs.launchpad.net/bugs/675520) created by **nodata (ubuntu-nodata)** ``` The page ""Why not tar?"" on the duplicity website discusses problems with tar: http://duplicity.nongnu.org/new_format.html A recent HN discussion addressed most of these points, maybe this could lead to improvements in duplicity? I quote: ""The first two issues -- a lack of index and the fact that you can't seek within a deflated tarball -- are true but are easily handled by smarter compression. Tarsnap, for example, splits off archive headers and stores them separately in order to speed up archive scanning. The third issue -- lack of support for modern filesystem features -- is just plain wrong. Sure, the tar in 7th edition UNIX didn't support these, but modern tars support modern filesystem features. 
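The pax extension headers mentioned in the quote are directly usable from Python: `tarfile`'s `PAX_FORMAT` stores, among other things, sub-second mtimes that the plain ustar format would truncate. A minimal in-memory sketch:

```python
import io
import tarfile

def pax_roundtrip_mtime(mtime):
    """Write one member with the given mtime into a POSIX/pax tar in
    memory, read it back, and return the stored mtime."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w",
                      format=tarfile.PAX_FORMAT) as tf:
        info = tarfile.TarInfo("file.txt")
        data = b"hello"
        info.size = len(data)
        info.mtime = mtime  # float mtime forces a pax header record
        tf.addfile(info, io.BytesIO(data))
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r") as tf:
        return tf.getmember("file.txt").mtime
```

With the default ustar/GNU formats the fractional part would be dropped, which is the compatibility point the quoted comment makes.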
The fourth issue -- general cruft -- is correct but irrelevant on modern tars since the problems caused by the cruft are eliminated via pax extension headers."" -- http://news.ycombinator.com/item?id=1665875 ```",16 118018703,2010-10-28 16:16:07.112,duplicity prefers fully-qualified-domain-name (fqdn) over hostname (lp:#667885),"[Original report](https://bugs.launchpad.net/bugs/667885) created by **nodata (ubuntu-nodata)** ``` Duplicity determines the machine's hostname in order to warn the user about unexpectedly backing up to the same location from two machines. However, it does this using socket.getfqdn(). It seems many users expect the value of socket.gethostname() instead. Now, I don't fully understand exactly the difference between the two calls, so I'm not necessarily advocating for this change. But gethostname() seems to be what most home users at least expect. Is the situation different for server users? If we made this change, we'd have to be careful to gracfully accept previous uses of getfqdn() that we wrote to manifests. I'd be willing to whip up a patch, Ken, if you think this is a sensible change. Examples: ========= (This original report) Duplicity gives: localhost6.localdomain6 $ cat /etc/hostname mybox $ cat /etc/hosts 192.168.1.5 mybox # Added by NetworkManager 127.0.0.1 localhost.localdomain localhost ::1 mybox localhost6.localdomain6 localhost6 ========= (Bug 1086068) Duplicity gives: localhost $ cat /etc/hostname computername $ cat /etc/hosts 127.0.0.1 localhost computername ========= ```",48 118019552,2010-10-01 05:16:03.391,Date format is inconsistent in the output of `duplicity collection-status` (lp:#652696),"[Original report](https://bugs.launchpad.net/bugs/652696) created by **zpcspm (zpcspm)** ``` I have a wrapper that parses the output of `duplicity collection-status` to make decisions about running a full or an incremental backup. Today it failed. 
I was assuming that the day of the month is always two digits, so I was matching on: Fri Oct 01 00:00:05 2010 but I've got Fri Oct 1 00:00:05 2010 I'm using duplicity 0.6.10 It would be nice to make the date format consistent, unless there's a strong reason against it. ``` Original tags: wishlist",16 118019548,2010-09-22 00:38:29.279,"Minor typo in manpage ("" its' "") (lp:#644820)","[Original report](https://bugs.launchpad.net/bugs/644820) created by **Aaron Whitehouse (aaron-whitehouse)** ``` The man page reads: ""namely ssh through its' utility routines scp and sftp"" it should read: ""namely ssh through its utility routines scp and sftp"" ```",6 118019541,2010-09-15 03:40:10.555,sftp backend: falsely reports ok if target fs was r/o (lp:#638629),"[Original report](https://bugs.launchpad.net/bugs/638629) created by **az (az-debian)** ``` this is a copy of debian bug #596857, which lives here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=596857 synopsis: when the target fs is mounted read-only and you're using sftp, then duplicity claims success for backups even though it couldn't save a single bit. this somehow happens only if the fs is mounted r/o - ""normal"" insufficient permissions result in a misleading error message (""Invalid SSH password"") and lots of retries. if you use --use-scp, you get lots of retries and 'scp failed' errors. the original submitter reported this for 0.6.08 and i've confirmed it for 0.6.09. ```",8 118019539,2010-09-14 20:47:35.971,Assert on line 93 fails (lp:#638436),"[Original report](https://bugs.launchpad.net/bugs/638436) created by **Rob Fortune (usedonlytosignup)** ``` I'd love to be more helpful but you didn't put any text there saying what that assertion was about, or I might have tried and fixed it. 
# duplicity --volsize 5 --asynchronous-upload --verbosity 9 --exclude '/tmp/' --exclude-other-filesystems --encrypt-key DEADBEEF / 'imaps://SECRET:SQUIRREL@imap.gmail.com' Using archive dir: /root/.cache/duplicity/aff86cdb7c6e7107a22eb3996c8af071 Using backup name: aff86cdb7c6e7107a22eb3996c8af071 Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.giobackend Failed: No module named gio Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Succeeded I'm ImapBackend (scheme imaps) connecting to imap.gmail.com as SECRET Type of imap class: IMAP4_SSL IMAP connected Main action: inc ================================================================================ duplicity 0.6.08b (March 11, 2010) Args: /usr/bin/duplicity --volsize 5 --asynchronous-upload --verbosity 9 --exclude /tmp/ --exclude-other-filesystems --encrypt-key DEADBEEF / imaps://SECRET:SQUIRREL@imap.gmail.com Linux yottagray 2.6.18-194.8.1.el5.028stab070.4 #1 SMP Tue Aug 17 19:11:52 MSD 2010 i686 i686 /usr/bin/python 2.6.5 (r265:79063, Jul 5 2010, 11:47:21) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] ================================================================================ Using temporary directory /tmp/duplicity-TwoQwm-tempdir Registering (mkstemp) temporary file /tmp/duplicity-TwoQwm-tempdir/mkstemp- ZuvbgK-1 Temp has 26162319360 available, backup will use approx 12058624. 
IMAP LIST: duplicity-full.20100914T233255Z.vol1.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol2.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol3.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol4.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol5.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol6.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol7.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol8.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol9.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol10.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol11.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol12.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol13.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol14.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol15.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol16.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol17.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol18.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol19.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol20.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol21.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol22.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol23.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol24.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol25.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol26.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol27.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol28.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol29.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol30.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol31.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol32.difftar.gpg IMAP LIST: 
duplicity-full.20100914T233255Z.vol33.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol34.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol35.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol36.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol37.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol38.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol39.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol40.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol41.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol42.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol43.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol43.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol44.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol45.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol46.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol47.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol48.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol49.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol50.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol51.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol52.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol53.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol54.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol55.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol56.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol57.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol57.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol58.difftar.gpg IMAP LIST: duplicity-full-signatures.20100914T233255Z.sigtar.gpg IMAP LIST: duplicity-full-signatures.20100914T233255Z.sigtar.gpg IMAP LIST: duplicity-full.20100914T233255Z.manifest.gpg Local and Remote metadata are synchronized, no sync needed. 
IMAP LIST: duplicity-full.20100914T233255Z.vol1.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol2.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol3.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol4.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol5.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol6.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol7.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol8.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol9.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol10.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol11.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol12.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol13.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol14.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol15.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol16.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol17.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol18.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol19.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol20.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol21.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol22.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol23.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol24.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol25.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol26.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol27.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol28.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol29.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol30.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol31.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol32.difftar.gpg IMAP LIST: 
duplicity-full.20100914T233255Z.vol33.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol34.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol35.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol36.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol37.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol38.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol39.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol40.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol41.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol42.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol43.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol43.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol44.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol45.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol46.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol47.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol48.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol49.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol50.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol51.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol52.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol53.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol54.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol55.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol56.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol57.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol57.difftar.gpg IMAP LIST: duplicity-full.20100914T233255Z.vol58.difftar.gpg IMAP LIST: duplicity-full-signatures.20100914T233255Z.sigtar.gpg IMAP LIST: duplicity-full-signatures.20100914T233255Z.sigtar.gpg IMAP LIST: duplicity-full.20100914T233255Z.manifest.gpg 63 files exist on backend 2 files exist in cache Extracting backup chains from list of files: ['duplicity- 
full.20100914T233255Z.vol1.difftar.gpg', 'duplicity- full.20100914T233255Z.vol2.difftar.gpg', 'duplicity- full.20100914T233255Z.vol3.difftar.gpg', 'duplicity- full.20100914T233255Z.vol4.difftar.gpg', 'duplicity- full.20100914T233255Z.vol5.difftar.gpg', 'duplicity- full.20100914T233255Z.vol6.difftar.gpg', 'duplicity- full.20100914T233255Z.vol7.difftar.gpg', 'duplicity- full.20100914T233255Z.vol8.difftar.gpg', 'duplicity- full.20100914T233255Z.vol9.difftar.gpg', 'duplicity- full.20100914T233255Z.vol10.difftar.gpg', 'duplicity- full.20100914T233255Z.vol11.difftar.gpg', 'duplicity- full.20100914T233255Z.vol12.difftar.gpg', 'duplicity- full.20100914T233255Z.vol13.difftar.gpg', 'duplicity- full.20100914T233255Z.vol14.difftar.gpg', 'duplicity- full.20100914T233255Z.vol15.difftar.gpg', 'duplicity- full.20100914T233255Z.vol16.difftar.gpg', 'duplicity- full.20100914T233255Z.vol17.difftar.gpg', 'duplicity- full.20100914T233255Z.vol18.difftar.gpg', 'duplicity- full.20100914T233255Z.vol19.difftar.gpg', 'duplicity- full.20100914T233255Z.vol20.difftar.gpg', 'duplicity- full.20100914T233255Z.vol21.difftar.gpg', 'duplicity- full.20100914T233255Z.vol22.difftar.gpg', 'duplicity- full.20100914T233255Z.vol23.difftar.gpg', 'duplicity- full.20100914T233255Z.vol24.difftar.gpg', 'duplicity- full.20100914T233255Z.vol25.difftar.gpg', 'duplicity- full.20100914T233255Z.vol26.difftar.gpg', 'duplicity- full.20100914T233255Z.vol27.difftar.gpg', 'duplicity- full.20100914T233255Z.vol28.difftar.gpg', 'duplicity- full.20100914T233255Z.vol29.difftar.gpg', 'duplicity- full.20100914T233255Z.vol30.difftar.gpg', 'duplicity- full.20100914T233255Z.vol31.difftar.gpg', 'duplicity- full.20100914T233255Z.vol32.difftar.gpg', 'duplicity- full.20100914T233255Z.vol33.difftar.gpg', 'duplicity- full.20100914T233255Z.vol34.difftar.gpg', 'duplicity- full.20100914T233255Z.vol35.difftar.gpg', 'duplicity- full.20100914T233255Z.vol36.difftar.gpg', 'duplicity- full.20100914T233255Z.vol37.difftar.gpg', 'duplicity- 
full.20100914T233255Z.vol38.difftar.gpg', 'duplicity- full.20100914T233255Z.vol39.difftar.gpg', 'duplicity- full.20100914T233255Z.vol40.difftar.gpg', 'duplicity- full.20100914T233255Z.vol41.difftar.gpg', 'duplicity- full.20100914T233255Z.vol42.difftar.gpg', 'duplicity- full.20100914T233255Z.vol43.difftar.gpg', 'duplicity- full.20100914T233255Z.vol43.difftar.gpg', 'duplicity- full.20100914T233255Z.vol44.difftar.gpg', 'duplicity- full.20100914T233255Z.vol45.difftar.gpg', 'duplicity- full.20100914T233255Z.vol46.difftar.gpg', 'duplicity- full.20100914T233255Z.vol47.difftar.gpg', 'duplicity- full.20100914T233255Z.vol48.difftar.gpg', 'duplicity- full.20100914T233255Z.vol49.difftar.gpg', 'duplicity- full.20100914T233255Z.vol50.difftar.gpg', 'duplicity- full.20100914T233255Z.vol51.difftar.gpg', 'duplicity- full.20100914T233255Z.vol52.difftar.gpg', 'duplicity- full.20100914T233255Z.vol53.difftar.gpg', 'duplicity- full.20100914T233255Z.vol54.difftar.gpg', 'duplicity- full.20100914T233255Z.vol55.difftar.gpg', 'duplicity- full.20100914T233255Z.vol56.difftar.gpg', 'duplicity- full.20100914T233255Z.vol57.difftar.gpg', 'duplicity- full.20100914T233255Z.vol57.difftar.gpg', 'duplicity- full.20100914T233255Z.vol58.difftar.gpg', 'duplicity-full- signatures.20100914T233255Z.sigtar.gpg', 'duplicity-full- signatures.20100914T233255Z.sigtar.gpg', 'duplicity- full.20100914T233255Z.manifest.gpg'] File duplicity-full.20100914T233255Z.vol1.difftar.gpg is not part of a known set; creating new set File duplicity-full.20100914T233255Z.vol2.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol3.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol4.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol5.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol6.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol7.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol8.difftar.gpg 
is part of known set File duplicity-full.20100914T233255Z.vol9.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol10.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol11.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol12.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol13.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol14.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol15.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol16.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol17.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol18.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol19.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol20.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol21.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol22.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol23.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol24.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol25.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol26.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol27.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol28.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol29.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol30.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol31.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol32.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol33.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol34.difftar.gpg is part of known set 
File duplicity-full.20100914T233255Z.vol35.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol36.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol37.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol38.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol39.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol40.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol41.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol42.difftar.gpg is part of known set File duplicity-full.20100914T233255Z.vol43.difftar.gpg is part of known set Removing still remembered temporary file /tmp/duplicity-TwoQwm- tempdir/mkstemp-ZuvbgK-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1239, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1232, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1137, in main globals.archive_dir).set_values() File ""/usr/lib/python2.6/site-packages/duplicity/collections.py"", line 681, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.6/site-packages/duplicity/collections.py"", line 804, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.6/site-packages/duplicity/collections.py"", line 794, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.6/site-packages/duplicity/collections.py"", line 93, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20100914T233255Z.vol1.difftar.gpg', 2: 'duplicity-full.20100914T233255Z.vol2.difftar.gpg', 3: 'duplicity- full.20100914T233255Z.vol3.difftar.gpg', 4: 'duplicity- full.20100914T233255Z.vol4.difftar.gpg', 5: 'duplicity- full.20100914T233255Z.vol5.difftar.gpg', 6: 'duplicity- full.20100914T233255Z.vol6.difftar.gpg', 7: 'duplicity- full.20100914T233255Z.vol7.difftar.gpg', 8: 'duplicity- 
full.20100914T233255Z.vol8.difftar.gpg', 9: 'duplicity- full.20100914T233255Z.vol9.difftar.gpg', 10: 'duplicity- full.20100914T233255Z.vol10.difftar.gpg', 11: 'duplicity- full.20100914T233255Z.vol11.difftar.gpg', 12: 'duplicity- full.20100914T233255Z.vol12.difftar.gpg', 13: 'duplicity- full.20100914T233255Z.vol13.difftar.gpg', 14: 'duplicity- full.20100914T233255Z.vol14.difftar.gpg', 15: 'duplicity- full.20100914T233255Z.vol15.difftar.gpg', 16: 'duplicity- full.20100914T233255Z.vol16.difftar.gpg', 17: 'duplicity- full.20100914T233255Z.vol17.difftar.gpg', 18: 'duplicity- full.20100914T233255Z.vol18.difftar.gpg', 19: 'duplicity- full.20100914T233255Z.vol19.difftar.gpg', 20: 'duplicity- full.20100914T233255Z.vol20.difftar.gpg', 21: 'duplicity- full.20100914T233255Z.vol21.difftar.gpg', 22: 'duplicity- full.20100914T233255Z.vol22.difftar.gpg', 23: 'duplicity- full.20100914T233255Z.vol23.difftar.gpg', 24: 'duplicity- full.20100914T233255Z.vol24.difftar.gpg', 25: 'duplicity- full.20100914T233255Z.vol25.difftar.gpg', 26: 'duplicity- full.20100914T233255Z.vol26.difftar.gpg', 27: 'duplicity- full.20100914T233255Z.vol27.difftar.gpg', 28: 'duplicity- full.20100914T233255Z.vol28.difftar.gpg', 29: 'duplicity- full.20100914T233255Z.vol29.difftar.gpg', 30: 'duplicity- full.20100914T233255Z.vol30.difftar.gpg', 31: 'duplicity- full.20100914T233255Z.vol31.difftar.gpg', 32: 'duplicity- full.20100914T233255Z.vol32.difftar.gpg', 33: 'duplicity- full.20100914T233255Z.vol33.difftar.gpg', 34: 'duplicity- full.20100914T233255Z.vol34.difftar.gpg', 35: 'duplicity- full.20100914T233255Z.vol35.difftar.gpg', 36: 'duplicity- full.20100914T233255Z.vol36.difftar.gpg', 37: 'duplicity- full.20100914T233255Z.vol37.difftar.gpg', 38: 'duplicity- full.20100914T233255Z.vol38.difftar.gpg', 39: 'duplicity- full.20100914T233255Z.vol39.difftar.gpg', 40: 'duplicity- full.20100914T233255Z.vol40.difftar.gpg', 41: 'duplicity- full.20100914T233255Z.vol41.difftar.gpg', 42: 'duplicity- 
full.20100914T233255Z.vol42.difftar.gpg', 43: 'duplicity- full.20100914T233255Z.vol43.difftar.gpg'}, 'duplicity- full.20100914T233255Z.vol43.difftar.gpg') ```",6 118019530,2010-09-13 20:03:56.129,Resuming incremental backup fails with KeyError in setLastSaved (lp:#637528),"[Original report](https://bugs.launchpad.net/bugs/637528) created by **Daniel Hahler (blueyed)** ``` I am experimenting with duply (a duplicity frontend), and after having canceled a running backup, it now fails to resume: duplicity 0.6.09 (July 25, 2010) Args: /usr/bin/duplicity --name duply_profile --no-encryption --verbosity 5 --volsize 100 --exclude-globbing-filelist /home/user/.duply/profile/exclude / file:///mnt/foo/bar Linux base 2.6.35-20-generic #29-Ubuntu SMP Fri Sep 3 14:49:14 UTC 2010 i686 /usr/bin/python 2.6.6 (r266:84292, Aug 24 2010, 21:47:18) [GCC 4.4.5 20100816 (prerelease)] ================================================================================ Using temporary directory /tmp/duplicity-Ew6tSE-tempdir Temp has 4154806272 available, backup will use approx 136314880. Local and Remote metadata are synchronized, no sync needed. Added incremental Backupset (start_time: Mon Sep 13 19:45:01 2010 / end_time: Mon Sep 13 19:52:53 2010) Added incremental Backupset (start_time: Mon Sep 13 19:52:53 2010 / end_time: Mon Sep 13 20:20:27 2010) Added incremental Backupset (start_time: Mon Sep 13 20:20:27 2010 / end_time: Mon Sep 13 21:20:02 2010) Last inc backup left a partial set, restarting. 
Last full backup date: Mon Sep 13 19:45:01 2010 Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1251, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1244, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1226, in main     incremental_backup(sig_chain)   File ""/usr/bin/duplicity"", line 487, in incremental_backup     globals.backend)   File ""/usr/bin/duplicity"", line 254, in write_multivol     globals.restart.setLastSaved(mf)   File ""/usr/bin/duplicity"", line 1100, in setLastSaved     vi = mf.volume_info_dict[self.start_vol] KeyError: 58 I have added ""print mf.volume_info_dict.keys()"" before this, and it displays: [128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127] This is the traceback from the aborted run: ^CTraceback (most recent call last):   File ""/usr/bin/duplicity"", line 1251, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1244, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1226, in main     incremental_backup(sig_chain)   File ""/usr/bin/duplicity"", line 487, in incremental_backup     globals.backend)   File ""/usr/bin/duplicity"", line 296, in write_multivol     at_end = gpg.GzipWriteFile(tarblock_iter, tdp.name, globals.volsize)   File ""/usr/lib/python2.6/dist-packages/duplicity/gpg.py"", line 331, in GzipWriteFile     new_block = block_iter.next(min(128*1024, bytes_to_go))   File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 505, in next     result = self.process(self.input_iter.next(), size)   File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 631, in process     data, last_block = self.get_data_block(fp, size - 512)   File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 658, in 
get_data_block     buf = fp.read(read_size)   File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 416, in read     self.sig_gen.update(buf)   File ""/usr/lib/python2.6/dist-packages/duplicity/librsync.py"", line 197, in update     if self.process_buffer():   File ""/usr/lib/python2.6/dist-packages/duplicity/librsync.py"", line 203, in process_buffer     eof, len_buf_read, cycle_out = self.sig_maker.cycle(self.buffer) KeyboardInterrupt I will copy/keep the exact duply profile and backup state, in case you can make use of any more information. ```",10 118019514,2010-09-08 11:45:24.217,Duplicity is failing with an assertion error when looking for a non-existent manifest file (lp:#633101),"[Original report](https://bugs.launchpad.net/bugs/633101) created by **Kenneth Loafman (kenneth-loafman)** ``` Binary package hint: duplicity Duplicity refuses to back up my files and throws up an assertion error where it appears to be trying to use a non-existent manifest.part file. That file doesn't exist and I cannot find where it is referenced. 
Here is the traceback: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1149, in main globals.archive_dir).set_values() File ""/usr/lib/python2.6/dist-packages/duplicity/collections.py"", line 676, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.6/dist-packages/duplicity/collections.py"", line 799, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.6/dist-packages/duplicity/collections.py"", line 789, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.6/dist-packages/duplicity/collections.py"", line 89, in add_filename self.set_manifest(filename) File ""/usr/lib/python2.6/dist-packages/duplicity/collections.py"", line 118, in set_manifest remote_filename) AssertionError: ('duplicity-full.20100817T193647Z.manifest.part', 'duplicity-full.20100817T193647Z.manifest') ProblemType: Bug DistroRelease: Ubuntu 10.10 Package: duplicity 0.6.09-0ubuntu2 ProcVersionSignature: Ubuntu 2.6.35-20.29-generic 2.6.35.4 Uname: Linux 2.6.35-20-generic i686 Architecture: i386 Date: Wed Sep 8 12:18:29 2010 InstallationMedia: Ubuntu-Netbook 10.10 ""Maverick Meerkat"" - Alpha i386 (20100803.1) ProcEnviron: PATH=(custom, user) LANG=en_GB.utf8 SHELL=/bin/bash SourcePackage: duplicity ``` Original tags: apport-bug i386 maverick ubuntu-une",32 118019272,2010-09-03 08:59:15.157,[Feature Request] support xz for compressing volumes (lp:#629357),"[Original report](https://bugs.launchpad.net/bugs/629357) created by **Rob Fortune (usedonlytosignup)** ``` xz offers a superior compression ratio at the cost of higher CPU load during creation of backup, but still decompresses (when you need it most) at a rapid pace. 
I don't know if it should be the default but adding it as an option could save yottabytes of bandwidth :) ``` Original tags: patch",86 118019269,2010-08-17 02:13:12.186,sftp backend establishes a new connection for every action (lp:#619016),"[Original report](https://bugs.launchpad.net/bugs/619016) created by **Max Kanat-Alexander (mkanat)** ``` duplicity 0.6.08b python 2.4.3 CentOS 5.5 When using the SFTP backend (really the scp:// backend, but I suppose they're the same now), duplicity creates a new connection for every single file that it wants to download or upload. For the manifest and signature files in particular, this can add 30 minutes to a backup process. Since SFTP is a persistent protocol and initial session negotiation is slow, it would probably be faster to just make the connection once and then issue commands over it, instead of re-making the connection for every upload or download. ```",12 118019263,2010-08-07 01:28:46.629,Fine grained control of full vs incremental backups (lp:#614631),"[Original report](https://bugs.launchpad.net/bugs/614631) created by **zpcspm (zpcspm)** ``` Currently (I'm using 0.6.09), if 'full' or 'incremental' actions are not specified explicitly, duplicity creates an incremental backup if it detects a chain, or a full backup otherwise. This behavior is somehow limited and I think I have an enhancement suggestion. Assume there would be an option named --chain-length. If called without explicit 'full' or 'incremental' actions, but with `--chain-length 10`, duplicity would still do a full backup if it would detect a recent chain consisting of 1 full backup and 9 incremental backups. I think this would add some useful flexibility (unless I'm missing something and it could be done without having to parse the output of collection-status action first, for enforcing an explicit 'full' action for certain chain lengths). 
```",6 118019510,2010-08-03 23:08:01.450,CTRL-C before first volume - temporary files not cleaned up (lp:#613243),"[Original report](https://bugs.launchpad.net/bugs/613243) created by **Liraz (liraz-siri)** ``` If I CTRL-C before the first volume is completed, temporary files created in the cache are not cleaned up. Clues may be found in the exception raised on CTRL-C: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1226, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 487, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 294, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.5/site-packages/duplicity/gpg.py"", line 279, in GPGWriteFile data = block_iter.next(min(block_size, bytes_to_go)).data File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 505, in next result = self.process(self.input_iter.next(), size) File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 187, in get_delta_iter for new_path, sig_path in collated: File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 265, in collate2iters relem1 = riter1.next() File ""/usr/lib/python2.5/site-packages/duplicity/selection.py"", line 174, in Iterate subpath, val = diryield_stack[-1].next() File ""/usr/lib/python2.5/site-packages/duplicity/selection.py"", line 141, in diryield for filename in robust.listpath(path): File ""/usr/lib/python2.5/site-packages/duplicity/robust.py"", line 61, in listpath dir_listing = check_common_error(error_handler, path.listdir) File ""/usr/lib/python2.5/site-packages/duplicity/robust.py"", line 37, in check_common_error return function(*args) File ""/usr/lib/python2.5/site-packages/duplicity/path.py"", line 514, in listdir return os.listdir(self.name) KeyboardInterrupt Exception exceptions.TypeError: ""'NoneType' object 
is not callable"" in > ignored Exception exceptions.TypeError: ""'NoneType' object is not callable"" in > ignored ```",10 118019494,2010-07-27 20:53:38.631,duplicity fails on ERROR 30 TypeError (lp:#610603),"[Original report](https://bugs.launchpad.net/bugs/610603) created by **Serge Stroobandt (serge-stroobandt)** ``` Hi there, Since recently, I am experiencing a problem with duplicity. On an up-to-date bubba excito machine, duplicity fails to make a backup on an ext3 mount and dumps the following error. The only thing that I might have done wrong is that I have manually removed previously failed backup archives because the disk was full. Apparently, I am not the only one with this error. See also deja-dup forum: https://bugs.launchpad.net/deja-dup/+bug/545486 What should I do to get my automated backups running again ??? NOTICE 1 . Reading globbing filelist /home/admin/.backup/weekly/excludeglob.list NOTICE 1 . Reading globbing filelist /home/admin/.backup/weekly/includeglob.list INFO 1 . Using temporary directory /tmp/duplicity-sc4Wlo-tempdir ERROR 30 TypeError . Traceback (most recent call last): . File ""/usr/bin/duplicity"", line 589, in ? . with_tempdir(main) . File ""/usr/bin/duplicity"", line 582, in with_tempdir . fn() . File ""/usr/bin/duplicity"", line 510, in main . globals.archive_dir).set_values() . File ""/usr/lib/python2.4/site-packages/duplicity/collections.py"", line 557, in set_values . self.warn(sig_chain_warning) . File ""/usr/lib/python2.4/site-packages/duplicity/collections.py"", line 626, in warn . + ""\n"" + ""\n"".join(self.other_sig_names), . TypeError: bad operand type for unary + . ```",6 118019492,2010-07-20 10:31:21.651,RFE: Method to list all files within a backup set (lp:#607670),"[Original report](https://bugs.launchpad.net/bugs/607670) created by **David Anderson (q-launchpad-net-dw-perspective-org-uk)** ``` I use duplicity for on-line backup. 
Each time a full backup is done I also remove a previous full backup away to off-line storage. To do this, I use ""ls -c"" to list the files in order of date, and then manually remove the old ones. It would be very helpful if duplicity had the facility to list the complete set of backup files in a particular set, i.e. in between full backups. e.g. duplicity list-duplicity-files -t 7D target_url This would list all the signature files, archive, manifests, etc., that are in the full backup set active 7 days ago and all increments up until the next full backup set after that. ```",12 118019487,2010-06-30 21:08:31.863,rsync backend password file (lp:#600391),"[Original report](https://bugs.launchpad.net/bugs/600391) created by **Orair (gustavo-orair)** ``` Actually I need to use duplicity to backup remotely. This backup will also save my server's /etc directory. As stated in https://help.ubuntu.com/community/rsync ""The rsync daemon is an alternative to SSH for remote backups. Although more difficult to configure, it does provide some benefits. For example, using SSH to make a remote backup of an entire system requires that the SSH daemon allow root login, which is considered a security risk. Using the rsync daemon allows for root login via SSH to be disabled."" So, I presumed I need to use duplicity through rsync backend and also use gpg encryption to encrypt my backup files. For security reasons I cannot pass any of my passwords by arguments in command line to duplicity. In the gpg-key case I need to create a gpg key and use --gpg-options '-- passphrase-file=${GPG_KEY}'. But I need to make a similar procedure to rsync. So, I have password saved in a file. The rsync utility provides the option --password-file to inform the password saved in a file. 
So, the following command will synchronize using rsync (/etc/password contains the password): rsync -avz --password-file /etc/rsyncd.password /backup/ user@backup_server::backup_module/etc Then I need to pass the --password-file option to rsync backend in a command such as the following: duplicity --gpg-options '--passphrase-file=${GPG_KEY}' --rsync-options '-- password-file /etc/rsyncd.password' /etc rsync://user@backup_server::backup_module/etc What is the correct way to do this? Is there any other way to achieve my needs? Best, Orair. ```",10 118019484,2010-06-18 15:15:35.086,Some translations not installed (lp:#595974),"[Original report](https://bugs.launchpad.net/bugs/595974) created by **Gabor Karsay (gabor-karsay)** ``` Some translations are not installed, although they are translatable in launchpad or there are po-files for them. This is probably because they are missing in the po/LINGUAS file. For example 0.7 series has 13 translatable languages in launchpad 9 po files in directory po/ 7 languages in po/LINGUAS The situation is similar in 0.6 series. ```",4 118019483,2010-06-08 18:15:07.653,error text when repository corrupt is misleading (lp:#591391),"[Original report](https://bugs.launchpad.net/bugs/591391) created by **matt.wartell (matt-wartell+lp)** ``` During a run of duplicity where the previous run had failed due to a key error, I repeatedly hit the line: assert dup_time.curtime != dup_time.prevtime, ""time not moving forward at appropriate pace - system clock issues?"" when the fault was that ""duplicity cleanup"" needed to be run. I apologize for not having saved logs. This bug could either be considered a documentation bug or a call for more robust consistency checking of the repository. 
$ duplicity --version duplicity 0.6.08b $ python --version Python 2.6.5 $ uname -a Linux tallguy.local 2.6.32-22-core2 #33 SMP Tue May 25 08:51:25 EDT 2010 i686 GNU/Linux # Ubuntu lucid with very minor kernel option changes This is not related to (and happened prior to https://bugs.launchpad.net/duplicity/+bug/591364) ```",6 118019476,2010-05-04 08:58:20.543,"""DiffDirException: Bad tarinfo name"" on diff. backup (lp:#575010)","[Original report](https://bugs.launchpad.net/bugs/575010) created by **Moritz Maisel (mail-maisel)** ``` After running for weeks without problems, differential backup now fails reproducibly with the following traceback (also included are the last 5 lines of log data): ---->8----*snip*---- tarfile: Bad Checksum Removing still remembered temporary file /tmp/duplicity-ZDESlF- tempdir/mkstemp-UP21kB-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1239, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1232, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1214, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 478, in incremental_backup bytes_written = dummy_backup(tarblock_iter) File ""/usr/bin/duplicity"", line 160, in dummy_backup while tarblock_iter.next(): File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 505, in next result = self.process(self.input_iter.next(), size) File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 187, in get_delta_iter for new_path, sig_path in collated: File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 275, in collate2iters relem2 = riter2.next() File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 344, in combine_path_iters refresh_triple_list(triple_list) File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 330, in refresh_triple_list new_triple = get_triple(old_triple[1]) File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 316, in get_triple path = 
path_iter_list[iter_index].next() File ""/usr/lib/python2.6/dist-packages/duplicity/diffdir.py"", line 236, in sigtar2path_iter raise DiffDirException(""Bad tarinfo name %s"" % (tarinfo.name,)) DiffDirException: Bad tarinfo name .purple/logs/jabber/bar@domain.com/.system/20000000000000000000000000000000000000000000000000000 ---->8----*snip*---- (Note: I replaced mail and im IDs due to privacy concerns. If you need ""real"" data I can provide that via PM.) To me it looks like duplicity is expecting some file named "".purple/logs/jabber/bar@domain.com/.system/20000000000000000000000000000000000000000000000000000"" which does not exist. System-info: duplicity 0.6.08 Ubuntu karmic Python 2.6.4 Target filesystem is FTP ```",6 118019451,2010-04-27 09:04:42.605,"""No backup chains found"" after ""Deleting /...[] (not authoritative at backend).!"" on s3 target (lp:#570586)","[Original report](https://bugs.launchpad.net/bugs/570586) created by **Jock (c-launchpad-dermot-org-uk)** ``` Full detail here: https://answers.launchpad.net/duplicity/+question/107074 After performing a full encrypted backup to S3, subsequent duplicity operations cause the local cache for that backup to be deleted, preventing further manipulation or verification of the backup. However, carrying out the same steps (full backup then verify) using a local file:// backend rather than S3 works as expected and the local cache is not deleted on subsequent operations. Using duplicity 0.6.08b from Debian squeeze (testing) with python-boto 1.2a-1 and python 2.5.2. After a full encrypted backup to S3, I then run a verify on my newly- created backup using the following command (again with AWS credentials exported first): duplicity verify \ --encrypt-key=MYKEYHERE \ --sign-key=MYKEYHERE \ --tempdir=/var/tmp \ ""s3+http://my-bucket-name"" / I then get this output from duplicity: Synchronizing remote metadata to local cache... 
Deleting local /root/.cache/duplicity/96e3afbc9cfaced5608c3540469d2d09/duplicity-full- signatures.20100411T144157Z.sigtar.gz (not authoritative at backend). Deleting local /root/.cache/duplicity/96e3afbc9cfaced5608c3540469d2d09/duplicity- full.20100411T144157Z.manifest (not authoritative at backend). Last full backup date: none Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1200, in main verify(col_stats) File ""/usr/bin/duplicity"", line 653, in verify collated = diffdir.collate2iters(restore_get_patched_rop_iter(col_stats), File ""/usr/bin/duplicity"", line 560, in restore_get_patched_rop_iter backup_chain = col_stats.get_backup_chain_at_time(time) File ""/usr/lib/python2.5/site-packages/duplicity/collections.py"", line 934, in get_backup_chain_at_time raise CollectionsError(""No backup chains found"") CollectionsError: No backup chains found ERROR:duplicity:Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1251, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1244, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1200, in main verify(col_stats) File ""/usr/bin/duplicity"", line 653, in verify collated = diffdir.collate2iters(restore_get_patched_rop_iter(col_stats), File ""/usr/bin/duplicity"", line 560, in restore_get_patched_rop_iter backup_chain = col_stats.get_backup_chain_at_time(time) File ""/usr/lib/python2.5/site-packages/duplicity/collections.py"", line 934, in get_backup_chain_at_time raise CollectionsError(""No backup chains found"") CollectionsError: No backup chains found My local cache is deleted at the beginning of the process and duplicity is then unable to verify the backup. Repopulating the cache manually (decrypting and gzip'ing the manifest and sigtar as appropriate) and then rerunning the verify operation deletes the cache again. 
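The ""not authoritative at backend"" deletions in the log above follow duplicity's metadata-sync rule. A hedged sketch of that rule as the log suggests it (names are illustrative; this is not duplicity's actual implementation): any cached metadata file whose name is absent from the backend's file listing is deleted locally, so an empty or failed S3 listing wipes the entire cache.

```python
# Illustrative sketch: during "Synchronizing remote metadata to local
# cache", cached files not present in the backend listing are treated
# as "not authoritative" and deleted. An empty backend listing (the
# failing S3 case above) therefore removes every manifest and sigtar.
def not_authoritative(local_cache, backend_listing):
    remote = set(backend_listing)
    return [name for name in local_cache if name not in remote]
```

If the S3 backend's listing silently comes back empty, this rule alone would explain both the deletions in the log and the subsequent ""No backup chains found"".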
Carrying out the same steps (full backup then verify) using a local file:// backend rather than S3 works okay. ``` Original tags: cache local s3",38 118020884,2014-11-02 17:39:06.909,Dropbox Backend App Key got lost in 0.7.0 (lp:#1388600),"[Original report](https://bugs.launchpad.net/bugs/1388600) created by **mm (mtl-0)** ``` Hi! After upgrading to 0.7.0 my Dropbox backend seems to be broken. When I try to back up to Dropbox, duplicity reports the following error: Attempt 1 failed. ErrorResponse: [403] u""Invalid app key (consumer key). Check your app's configuration to make sure everything is correct."" Attempt 2 failed. ErrorResponse: [403] u""Invalid app key (consumer key). Check your app's configuration to make sure everything is correct."" Attempt 3 failed. ErrorResponse: [403] u""Invalid app key (consumer key). Check your app's configuration to make sure everything is correct."" Attempt 4 failed. ErrorResponse: [403] u""Invalid app key (consumer key). Check your app's configuration to make sure everything is correct."" I fixed it by inserting the old Dropbox key in dpbxbackend.py and compiling it. ```",18 118020861,2014-10-31 05:19:17.497,Unable to update to latest duplicity through apt-get (lp:#1387942),"[Original report](https://bugs.launchpad.net/bugs/1387942) created by **Gaurav Ashtikar (gau1991)** ``` Duplicity Version: 0.7.0-0ubuntu0ppa1012~ubuntu12.04.1 Python version: Python 2.7.3 (default, Feb 27 2014, 19:58:35) OS: Ubuntu 12.04.5 LTS Hi, I was updating packages through apt-get upgrade, and duplicity gave me the following error: Setting up duplicity (0.7.0-0ubuntu0ppa1012~ubuntu12.04.1) ... 
SyntaxError: ('invalid syntax', ('/usr/lib/python2.7/dist- packages/duplicity/backends/gdocsbackend.py', 51, 197, "" self.client = gdata.docs.client.DocsClient(source='duplicity Usage: dpkg Options: parser options: output formats, defaults to 'dpkg' for compatibility with dpkg than version is lower than 0) ')\n"")) dpkg: error processing duplicity (--configure): subprocess installed post-installation script returned error exit status 101 Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: duplicity E: Sub-process /usr/bin/dpkg returned an error code (1) ```",14 118020857,2014-10-30 16:52:20.792,rdiffdir: Broken sigs for newly created files (lp:#1387786),"[Original report](https://bugs.launchpad.net/bugs/1387786) created by **David Coppit (coppit)** ``` This is potentially a very bad bug. It causes rdiffdir to create invalid signature files, and the resulting delta files are full of zeroes. 1) Create a shell script with the commands below. 2) Run it. You should see ""Done"" 3) Instead you see: Delta doesn't appear valid Files FILE1 and FILE2 differ Delta did not properly patch FILE Done I reproduced this on CentOS 6.4 and OS X 10.10 (Yosemite). Some observations: * If you run the commands one by one manually it works * The sig file seems to be the problem * If you create a larger file, say 10MB using dd, it will succeed perhaps 1 in 20 times. Otherwise this script only succeeds about 1 in 7000 times. I could speculate that librsync is using a lower-level file access API that is bypassing OS file caches. Let me know if I need to file this bug with librsync instead. 
RDIFFDIR=rdiffdir rm -f FILE1 FILE2 echo 0123456789 > FILE1 echo xxxxxxxxxx > FILE2 #rm -f FILE1 FILE2 #dd if=/dev/urandom of=FILE1 bs=10M count=1 2>/dev/null #cp FILE1 FILE2 #echo ""a"" >> FILE2 rm -f FILE1.sig $RDIFFDIR sig FILE1 FILE1.sig rm -f FILE2.delta $RDIFFDIR delta FILE1.sig FILE2 FILE2.delta xxd FILE2.delta | grep diff -q || echo ""Delta doesn't appear valid"" $RDIFFDIR patch FILE1 FILE2.delta diff -q FILE1 FILE2 || echo ""Delta did not properly patch FILE"" echo ""Done"" ``` Original tags: librsync rdiffdir",6 118020839,2014-10-22 09:11:35.644,gdocs and mega backend fail with par2 (lp:#1384129),"[Original report](https://bugs.launchpad.net/bugs/1384129) created by **laurentl (laurent-lavaud)** ``` Hello, I use the latest duplicity version 0.6.25 with Python 2.6; I run the command on a QNAP server. Without the par2+ option, everything works well. error log for GDOCS backend: Duplicity 0.6 series is being deprecated: See http://www.nongnu.org/duplicity/ Using archive dir: /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c Using backup name: 42c50b0eead9ebba415b3f454e57b95c Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded Main action: inc 
================================================================================ duplicity 0.6.25 (October 20, 2014) Args: /opt/bin/duplicity -v9 --encrypt-key=xxx --sign-key=xxx --use-agent --allow-source-mismatch --tempdir=/share/MD0_DATA/.duplicity/temp /share/MD0_DATA/data/documents par2+gdocs://xxx/duplicity/documents Linux NASC5ED37 3.4.6 #1 Fri Oct 3 18:32:07 CST 2014 armv5tel /opt/bin/python2.6 2.6.8 (unknown, Apr 12 2012, 13:02:25) [GCC 4.2.3] ================================================================================ Using temporary directory /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ- tempdir Registering (mkstemp) temporary file /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ-tempdir/mkstemp-BfITE5-1 Temp has 1950688108544 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: Par2WrapperBackend Archive dir: /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No signatures found, switching to full backup. Using temporary directory /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-aKEnwt- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-aKEnwt- tempdir/mktemp-H7NfKO-1 Using temporary directory /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-b95Oqb- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-b95Oqb- tempdir/mktemp-YanfYm-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ-tempdir/mktemp-pfGEZX-2 Selecting /share/MD0_DATA/data/documents Comparing . 
and None Getting delta of (. dir) and None A . Selecting /share/MD0_DATA/data/documents/astronomie Comparing astronomie and None Getting delta of (astronomie dir) and None A astronomie Selecting /share/MD0_DATA/data/documents/astronomie/Astrométrie.pdf Comparing astronomie/Astrométrie.pdf and None Getting delta of (astronomie/Astrométrie.pdf reg) and None A astronomie/Astrométrie.pdf Selecting /share/MD0_DATA/data/documents/astronomie/AviStack_eng.pdf Comparing astronomie/AviStack_eng.pdf and None Getting delta of (astronomie/AviStack_eng.pdf reg) and None A astronomie/AviStack_eng.pdf Selecting /share/MD0_DATA/data/documents/astronomie/Ciel en fete-20110515-imagerie planetaire.pdf Comparing astronomie/Ciel en fete-20110515-imagerie planetaire.pdf and None Getting delta of (astronomie/Ciel en fete-20110515-imagerie planetaire.pdf reg) and None A astronomie/Ciel en fete-20110515-imagerie planetaire.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_CCD_urbain.pdf Comparing astronomie/RCE_2010_CCD_urbain.pdf and None Getting delta of (astronomie/RCE_2010_CCD_urbain.pdf reg) and None A astronomie/RCE_2010_CCD_urbain.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Choix_CCD.pdf Comparing astronomie/RCE_2010_Choix_CCD.pdf and None Getting delta of (astronomie/RCE_2010_Choix_CCD.pdf reg) and None A astronomie/RCE_2010_Choix_CCD.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Optimisation_C14.pdf Comparing astronomie/RCE_2010_Optimisation_C14.pdf and None Getting delta of (astronomie/RCE_2010_Optimisation_C14.pdf reg) and None A astronomie/RCE_2010_Optimisation_C14.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Photographier_ISS.pdf Comparing astronomie/RCE_2010_Photographier_ISS.pdf and None Getting delta of (astronomie/RCE_2010_Photographier_ISS.pdf reg) and None A astronomie/RCE_2010_Photographier_ISS.pdf Selecting /share/MD0_DATA/data/documents/astronomie/Stage_traitement_d.image.pdf Comparing 
astronomie/Stage_traitement_d.image.pdf and None Getting delta of (astronomie/Stage_traitement_d.image.pdf reg) and None A astronomie/Stage_traitement_d.image.pdf Selecting /share/MD0_DATA/data/documents/astronomie/ameliorer_c8.pdf Comparing astronomie/ameliorer_c8.pdf and None Getting delta of (astronomie/ameliorer_c8.pdf reg) and None A astronomie/ameliorer_c8.pdf Removing still remembered temporary file /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-aKEnwt- tempdir/mktemp-H7NfKO-1 Removing still remembered temporary file /root/.cache/duplicity/42c50b0eead9ebba415b3f454e57b95c/duplicity-b95Oqb- tempdir/mktemp-YanfYm-1 AsyncScheduler: running task synchronously (asynchronicity disabled) Making directory /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ- tempdir/duplicity_temp.1 Create Par2 recovery files Deleting /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ- tempdir/duplicity_temp.1/duplicity-full.20141021T121651Z.vol1.difftar.gpg Attempt 1 failed: BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Backtrace of previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 118, in put % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 175, in __handle_error raise BackendException(message) BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Attempt 2 failed: BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Backtrace of 
previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 118, in put % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 175, in __handle_error raise BackendException(message) BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Attempt 3 failed: BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Backtrace of previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 118, in put % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 175, in __handle_error raise BackendException(message) BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Attempt 4 failed: BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Backtrace of previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 118, in put % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors) File 
""/opt/lib/python2.6/site- packages/duplicity/backends/gdocsbackend.py"", line 175, in __handle_error raise BackendException(message) BackendException: Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Failed to upload file 'duplicity- full.20141021T121651Z.vol1.difftar.gpg.vol000+200.par2' to remote folder 'documents': Server responded with: 400, Releasing lockfile Removing still remembered temporary file /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ-tempdir/mktemp-pfGEZX-2 Removing still remembered temporary file /share/MD0_DATA/.duplicity/temp/duplicity-Ck8oeJ-tempdir/mkstemp-BfITE5-1 Cleanup of temporary directory /share/MD0_DATA/.duplicity/temp/duplicity- Ck8oeJ-tempdir failed - this is probably a bug. ####################################################################### error log for MEGA backend: Duplicity 0.6 series is being deprecated: See http://www.nongnu.org/duplicity/ Using archive dir: /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03 Using backup name: e41bb6d0173ab4e11729dba8df503e03 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded 
Main action: inc ================================================================================ duplicity 0.6.25 (October 20, 2014) Args: /opt/bin/duplicity -v9 --encrypt-key=xxx --sign-key=xxx --use-agent --allow-source-mismatch --tempdir=/share/MD0_DATA/.duplicity/temp /share/MD0_DATA/data/documents par2+mega://xxx@mega.co.nz/duplicity/documents Linux NASC5ED37 3.4.6 #1 Fri Oct 3 18:32:07 CST 2014 armv5tel /opt/bin/python2.6 2.6.8 (unknown, Apr 12 2012, 13:02:25) [GCC 4.2.3] ================================================================================ Using temporary directory /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir Registering (mkstemp) temporary file /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/mkstemp-ufItZI-1 Temp has 1957940158464 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: Par2WrapperBackend Archive dir: /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No signatures found, switching to full backup. 
Using temporary directory /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-XcjmQL- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-XcjmQL- tempdir/mktemp-yfsjYA-1 Using temporary directory /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-PBb6HT- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-PBb6HT- tempdir/mktemp-ejSmok-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/mktemp-fRPqbs-2 Selecting /share/MD0_DATA/data/documents Comparing . and None Getting delta of (. dir) and None A . Selecting /share/MD0_DATA/data/documents/astronomie Comparing astronomie and None Getting delta of (astronomie dir) and None A astronomie Selecting /share/MD0_DATA/data/documents/astronomie/Astrométrie.pdf Comparing astronomie/Astrométrie.pdf and None Getting delta of (astronomie/Astrométrie.pdf reg) and None A astronomie/Astrométrie.pdf Selecting /share/MD0_DATA/data/documents/astronomie/AviStack_eng.pdf Comparing astronomie/AviStack_eng.pdf and None Getting delta of (astronomie/AviStack_eng.pdf reg) and None A astronomie/AviStack_eng.pdf Selecting /share/MD0_DATA/data/documents/astronomie/Ciel en fete-20110515-imagerie planetaire.pdf Comparing astronomie/Ciel en fete-20110515-imagerie planetaire.pdf and None Getting delta of (astronomie/Ciel en fete-20110515-imagerie planetaire.pdf reg) and None A astronomie/Ciel en fete-20110515-imagerie planetaire.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_CCD_urbain.pdf Comparing astronomie/RCE_2010_CCD_urbain.pdf and None Getting delta of (astronomie/RCE_2010_CCD_urbain.pdf reg) and None A astronomie/RCE_2010_CCD_urbain.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Choix_CCD.pdf Comparing astronomie/RCE_2010_Choix_CCD.pdf and None Getting delta of 
(astronomie/RCE_2010_Choix_CCD.pdf reg) and None A astronomie/RCE_2010_Choix_CCD.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Optimisation_C14.pdf Comparing astronomie/RCE_2010_Optimisation_C14.pdf and None Getting delta of (astronomie/RCE_2010_Optimisation_C14.pdf reg) and None A astronomie/RCE_2010_Optimisation_C14.pdf Selecting /share/MD0_DATA/data/documents/astronomie/RCE_2010_Photographier_ISS.pdf Comparing astronomie/RCE_2010_Photographier_ISS.pdf and None Getting delta of (astronomie/RCE_2010_Photographier_ISS.pdf reg) and None A astronomie/RCE_2010_Photographier_ISS.pdf Selecting /share/MD0_DATA/data/documents/astronomie/Stage_traitement_d.image.pdf Comparing astronomie/Stage_traitement_d.image.pdf and None Getting delta of (astronomie/Stage_traitement_d.image.pdf reg) and None A astronomie/Stage_traitement_d.image.pdf Selecting /share/MD0_DATA/data/documents/astronomie/ameliorer_c8.pdf Comparing astronomie/ameliorer_c8.pdf and None Getting delta of (astronomie/ameliorer_c8.pdf reg) and None A astronomie/ameliorer_c8.pdf Removing still remembered temporary file /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-XcjmQL- tempdir/mktemp-yfsjYA-1 Removing still remembered temporary file /root/.cache/duplicity/e41bb6d0173ab4e11729dba8df503e03/duplicity-PBb6HT- tempdir/mktemp-ejSmok-1 AsyncScheduler: running task synchronously (asynchronicity disabled) Making directory /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1 Create Par2 recovery files Deleting /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg Attempt 1 failed: BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Backtrace of previous error: Traceback (innermost last): File 
""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 85, in put % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 143, in __handle_error raise BackendException(message) BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Attempt 2 failed: BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Backtrace of previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 85, in put % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 143, in __handle_error raise BackendException(message) BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Attempt 3 failed: BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Backtrace of previous error: Traceback 
(innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 85, in put % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 143, in __handle_error raise BackendException(message) BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Attempt 4 failed: BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Backtrace of previous error: Traceback (innermost last): File ""/opt/lib/python2.6/site-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 85, in put % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors) File ""/opt/lib/python2.6/site- packages/duplicity/backends/megabackend.py"", line 143, in __handle_error raise BackendException(message) BackendException: Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Failed to upload file '/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/duplicity_temp.1/duplicity- full.20141022T085432Z.vol1.difftar.gpg.par2' to remote folder 'documents': local variable 'i' referenced before assignment Releasing lockfile Removing still remembered temporary file 
/share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/mkstemp-ufItZI-1 Removing still remembered temporary file /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir/mktemp-fRPqbs-2 Cleanup of temporary directory /share/MD0_DATA/.duplicity/temp/duplicity-N1jox7-tempdir failed - this is probably a bug. ```",8 118020835,2014-10-16 20:14:04.667,missing python-oauth (lp:#1382215),"[Original report](https://bugs.launchpad.net/bugs/1382215) created by **rdesfo (rdesfo)** ``` There is a bug in the deja-dup repo that is resolved by installing [python-oauth](https://bugs.launchpad.net/deja-dup/+bug/1184225). Should 'python-oauth' be added as a dependency of duplicity? ```",6 118020802,2014-10-15 20:53:17.404,rdiffdir: Support user-specified output dir for better multi-diff performance (lp:#1381754),"[Original report](https://bugs.launchpad.net/bugs/1381754) created by **David Coppit (coppit)** ``` Here's the use case that I'm using rdiffdir for: 1) Create a base VM and distribute it 2) Create version 1 of the VM from the base, compute the binary diff, distribute it 3) On remote hosts, copy the base VM and apply the diff 4) Create version 2 of the VM from the base, compute the binary diff, distribute it 5) On remote hosts, copy the base VM and apply the diff So these are not incremental diffs, but rather all based on the ""base"" VM. On the remote hosts, the following happens: 1) copy the VM image from the cache to the target dir 2) rdiffdir copies the VM image from the target dir to a temp file, then patches it So there's an extra copy in there that is killing my performance. It's actually faster to download a full VM image (one full write) than to download the patch and patch the previously downloaded image (2 full reads, 1 full write, 1 delta write). What I'd like is for rdiffdir to let me specify an output directory for the patched files. The default would be as it is today, which is to patch the same directory as the input. 
But with the option, one could leave the original files untouched, and store the patched files in the output directory. duplicity version 0.6.24 python 2.7.5 Any OS ``` Original tags: rdiffdir",6 118020800,2014-10-08 18:47:55.676,rdiffdir: pipe STDIN doesn't work. Docs unclear (lp:#1378986),"[Original report](https://bugs.launchpad.net/bugs/1378986) created by **David Coppit (coppit)** ``` The man page says this: rdiffdir [options] sig[nature] basis_dir signature_file rdiffdir [options] delta full_sigtar {incr_sigtar} new_dir delta_file rdiffdir [options] patch basis_dir delta_file rdiffdir [options] tar basis_dir tar_file If signature_file or delta_file are ""-"", the data will be read from stdin or written to stdout as appropriate. The last sentence is confusing, because neither signature_file nor delta_file is an input. Perhaps the list should include full_sigtar? Or maybe full_sigtar should be renamed to signature_file? I don't know if this should be a separate bug, but it's related. This doesn't work: $ rdiffdir sig . /tmp/file.sig $ cat /tmp/file.sig | rdiffdir delta - . 
/tmp/file.delta Traceback (most recent call last): File ""/usr/local/Cellar/duplicity/0.6.24/libexec/bin/rdiffdir"", line 232, in main() File ""/usr/local/Cellar/duplicity/0.6.24/libexec/bin/rdiffdir"", line 217, in main write_delta(file_args[-2], sig_infp, delta_outfp) File ""/usr/local/Cellar/duplicity/0.6.24/libexec/bin/rdiffdir"", line 179, in write_delta diffdir.write_block_iter(delta_iter, outfp) File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 719, in write_block_iter for block in block_iter: File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 518, in next result = self.process(self.input_iter.next()) File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 190, in get_delta_iter for new_path, sig_path in collated: File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 281, in collate2iters relem2 = riter2.next() File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 346, in combine_path_iters range(len(path_iter_list)))) File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 322, in get_triple path = path_iter_list[iter_index].next() File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/diffdir.py"", line 232, in sigtar2path_iter tf = util.make_tarfile(""r"", sigtarobj) File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/util.py"", line 113, in make_tarfile tf = tarfile.TarFile(""arbitrary"", mode, fp) File ""/usr/local/Cellar/duplicity/0.6.24/lib/python2.7/site- packages/duplicity/tarfile.py"", line 1565, in __init__ self.offset = self.fileobj.tell() IOError: [Errno 29] Illegal seek But this does: $ rdiffdir delta - . 
/tmp/file.delta < /tmp/file.sig ``` Original tags: rdiffdir",6 118020791,2014-10-06 14:10:10.245,rdiffdir RFE: Support omitting timestamp in gzip'd signature and delta files (lp:#1377959),"[Original report](https://bugs.launchpad.net/bugs/1377959) created by **David Coppit (coppit)** ``` In rdiffdir, the -z option produces signature and delta files that have an embedded timestamp. gzip has the --no-name option, which will omit the timestamp and some other info, so that the resulting file is predictably the same. The reason this is important is that people may run md5sum on the signature or delta in order to see if it's been generated before. Timestamps make that impossible. I looked at the file after running gzip --no-name, and it appears that gzip inserts a ""0"" for the time. So in python, it looks like you can simply pass ""mtime=0"" to the gzip.GzipFile() function. I would submit a patch, but I'm not sure if you want this to be optional, or if duplicity requires the embedded timestamp for some reason. The workaround is to run ""gzip --no-name"" on the files after running rdiffdir. $ rdiffdir -V rdiffdir 0.6.24 $ python --version Python 2.7.5 $ uname -a Darwin dhcp-10-20-61-194.nvidia.com 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64 i386 MacBookPro10,1 Darwin ``` Original tags: rdiffdir",6 118020765,2014-10-01 23:22:42.278,gdocs restore running out of memory on low spec macnine (lp:#1376506),"[Original report](https://bugs.launchpad.net/bugs/1376506) created by **mfitz (mfitz)** ``` Hi. Google file restores and verifies are failing for my setup. I also found this problem with whichever older version comes with Ubuntu 12.05. I changed everything to Debian Wheezy and 0.6.24. Redid full backup with new version to be sure. Same problem. 
Only 'unusual' settings I can imagine are: * 500MB ram no swap * long filepaths with spaces and dollars * volsize 256 * reading and writing ntfs external drive * --rsync-options=""--bwlimit=65"" during upload No problems evident in upload process and size of upload consistent with expected backup size. Please advise! --------- Duplicity version: 0.6.24-1~bpo70+1 Python version: 2.7.3-4+deb7u1 OS Distro and version: Debian GNU/Linux 7 (wheezy) Type of target filesystem: ntfs-3g drobo logs: dummy@drobopc:~$ duplicity restore --name=allPhotos --tempdir /mnt/drobo/tmp --file-to-restore AA/BB/\$My\ Pictures/Our\ Wedding/foo090808_030.jpg gdocs://suppressed0username@gmail.com/duplicity bob Using archive dir: /home/dummy/.cache/duplicity/allPhotos Using backup name: allPhotos Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded Main action: restore ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/bin/duplicity restore --name=allPhotos --verbosity 9 --tempdir /mnt/drobo/tmp --file-to-restore AA/BB/$My Pictures/Our Wedding/foo090808_030.jpg gdocs://suppressed0username@gmail.com/duplicity bob Linux drobopc 3.2.0-4-686-pae #1 SMP Debian 
3.2.60-1+deb7u3 i686 /usr/bin/python 2.7.3 (default, Mar 14 2014, 11:57:14) [GCC 4.7.2] ================================================================================ Using temporary directory /mnt/drobo/tmp/duplicity-1mMql0-tempdir Registering (mkstemp) temporary file /mnt/drobo/tmp/duplicity-1mMql0-tempdir/mkstemp-K2SDpw-1 Temp has 16616548478976 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 155 files exist on backend 4 files exist in cache Extracting backup chains from list of files: [u'duplicity- full.20140927T213804Z.manifest.gpg', u'duplicity-full- signatures.20140927T213804Z.sigtar.gpg', u'duplicity- full.20140927T213804Z.vol153.difftar.gpg', u'duplicity- full.20140927T213804Z.vol152.difftar.gpg', u'duplicity- full.20140927T213804Z.vol151.difftar.gpg', u'duplicity- full.20140927T213804Z.vol150.difftar.gpg', u'duplicity- full.20140927T213804Z.vol149.difftar.gpg', u'duplicity- full.20140927T213804Z.vol148.difftar.gpg', u'duplicity- full.20140927T213804Z.vol147.difftar.gpg', u'duplicity- full.20140927T213804Z.vol146.difftar.gpg', u'duplicity- full.20140927T213804Z.vol145.difftar.gpg', u'duplicity- full.20140927T213804Z.vol144.difftar.gpg', u'duplicity- full.20140927T213804Z.vol143.difftar.gpg', u'duplicity- full.20140927T213804Z.vol142.difftar.gpg', u'duplicity- full.20140927T213804Z.vol141.difftar.gpg', u'duplicity- full.20140927T213804Z.vol140.difftar.gpg', u'duplicity- full.20140927T213804Z.vol139.difftar.gpg', u'duplicity- full.20140927T213804Z.vol138.difftar.gpg', u'duplicity- full.20140927T213804Z.vol137.difftar.gpg', u'duplicity- full.20140927T213804Z.vol136.difftar.gpg', u'duplicity- full.20140927T213804Z.vol135.difftar.gpg', u'duplicity- full.20140927T213804Z.vol134.difftar.gpg', u'duplicity- full.20140927T213804Z.vol133.difftar.gpg', u'duplicity- full.20140927T213804Z.vol132.difftar.gpg', u'duplicity- full.20140927T213804Z.vol131.difftar.gpg', u'duplicity- 
full.20140927T213804Z.vol130.difftar.gpg', u'duplicity- full.20140927T213804Z.vol129.difftar.gpg', u'duplicity- full.20140927T213804Z.vol128.difftar.gpg', u'duplicity- full.20140927T213804Z.vol127.difftar.gpg', u'duplicity- full.20140927T213804Z.vol126.difftar.gpg', u'duplicity- full.20140927T213804Z.vol125.difftar.gpg', u'duplicity- full.20140927T213804Z.vol124.difftar.gpg', u'duplicity- full.20140927T213804Z.vol123.difftar.gpg', u'duplicity- full.20140927T213804Z.vol122.difftar.gpg', u'duplicity- full.20140927T213804Z.vol121.difftar.gpg', u'duplicity- full.20140927T213804Z.vol120.difftar.gpg', u'duplicity- full.20140927T213804Z.vol119.difftar.gpg', u'duplicity- full.20140927T213804Z.vol118.difftar.gpg', u'duplicity- full.20140927T213804Z.vol117.difftar.gpg', u'duplicity- full.20140927T213804Z.vol116.difftar.gpg', u'duplicity- full.20140927T213804Z.vol115.difftar.gpg', u'duplicity- full.20140927T213804Z.vol114.difftar.gpg', u'duplicity- full.20140927T213804Z.vol113.difftar.gpg', u'duplicity- full.20140927T213804Z.vol112.difftar.gpg', u'duplicity- full.20140927T213804Z.vol111.difftar.gpg', u'duplicity- full.20140927T213804Z.vol110.difftar.gpg', u'duplicity- full.20140927T213804Z.vol109.difftar.gpg', u'duplicity- full.20140927T213804Z.vol108.difftar.gpg', u'duplicity- full.20140927T213804Z.vol107.difftar.gpg', u'duplicity- full.20140927T213804Z.vol106.difftar.gpg', u'duplicity- full.20140927T213804Z.vol105.difftar.gpg', u'duplicity- full.20140927T213804Z.vol104.difftar.gpg', u'duplicity- full.20140927T213804Z.vol103.difftar.gpg', u'duplicity- full.20140927T213804Z.vol102.difftar.gpg', u'duplicity- full.20140927T213804Z.vol101.difftar.gpg', u'duplicity- full.20140927T213804Z.vol100.difftar.gpg', u'duplicity- full.20140927T213804Z.vol99.difftar.gpg', u'duplicity- full.20140927T213804Z.vol98.difftar.gpg', u'duplicity- full.20140927T213804Z.vol97.difftar.gpg', u'duplicity- full.20140927T213804Z.vol96.difftar.gpg', u'duplicity- full.20140927T213804Z.vol95.difftar.gpg', 
u'duplicity- full.20140927T213804Z.vol94.difftar.gpg', u'duplicity- full.20140927T213804Z.vol93.difftar.gpg', u'duplicity- full.20140927T213804Z.vol92.difftar.gpg', u'duplicity- full.20140927T213804Z.vol91.difftar.gpg', u'duplicity- full.20140927T213804Z.vol90.difftar.gpg', u'duplicity- full.20140927T213804Z.vol89.difftar.gpg', u'duplicity- full.20140927T213804Z.vol88.difftar.gpg', u'duplicity- full.20140927T213804Z.vol87.difftar.gpg', u'duplicity- full.20140927T213804Z.vol86.difftar.gpg', u'duplicity- full.20140927T213804Z.vol85.difftar.gpg', u'duplicity- full.20140927T213804Z.vol84.difftar.gpg', u'duplicity- full.20140927T213804Z.vol83.difftar.gpg', u'duplicity- full.20140927T213804Z.vol82.difftar.gpg', u'duplicity- full.20140927T213804Z.vol81.difftar.gpg', u'duplicity- full.20140927T213804Z.vol80.difftar.gpg', u'duplicity- full.20140927T213804Z.vol79.difftar.gpg', u'duplicity- full.20140927T213804Z.vol78.difftar.gpg', u'duplicity- full.20140927T213804Z.vol77.difftar.gpg', u'duplicity- full.20140927T213804Z.vol76.difftar.gpg', u'duplicity- full.20140927T213804Z.vol75.difftar.gpg', u'duplicity- full.20140927T213804Z.vol74.difftar.gpg', u'duplicity- full.20140927T213804Z.vol73.difftar.gpg', u'duplicity- full.20140927T213804Z.vol72.difftar.gpg', u'duplicity- full.20140927T213804Z.vol71.difftar.gpg', u'duplicity- full.20140927T213804Z.vol70.difftar.gpg', u'duplicity- full.20140927T213804Z.vol69.difftar.gpg', u'duplicity- full.20140927T213804Z.vol68.difftar.gpg', u'duplicity- full.20140927T213804Z.vol67.difftar.gpg', u'duplicity- full.20140927T213804Z.vol66.difftar.gpg', u'duplicity- full.20140927T213804Z.vol65.difftar.gpg', u'duplicity- full.20140927T213804Z.vol64.difftar.gpg', u'duplicity- full.20140927T213804Z.vol63.difftar.gpg', u'duplicity- full.20140927T213804Z.vol62.difftar.gpg', u'duplicity- full.20140927T213804Z.vol61.difftar.gpg', u'duplicity- full.20140927T213804Z.vol60.difftar.gpg', u'duplicity- full.20140927T213804Z.vol59.difftar.gpg', u'duplicity- 
full.20140927T213804Z.vol58.difftar.gpg', u'duplicity- full.20140927T213804Z.vol57.difftar.gpg', u'duplicity- full.20140927T213804Z.vol56.difftar.gpg', u'duplicity- full.20140927T213804Z.vol55.difftar.gpg', u'duplicity- full.20140927T213804Z.vol54.difftar.gpg', u'duplicity- full.20140927T213804Z.vol53.difftar.gpg', u'duplicity- full.20140927T213804Z.vol52.difftar.gpg', u'duplicity- full.20140927T213804Z.vol51.difftar.gpg', u'duplicity- full.20140927T213804Z.vol50.difftar.gpg', u'duplicity- full.20140927T213804Z.vol49.difftar.gpg', u'duplicity- full.20140927T213804Z.vol48.difftar.gpg', u'duplicity- full.20140927T213804Z.vol47.difftar.gpg', u'duplicity- full.20140927T213804Z.vol46.difftar.gpg', u'duplicity- full.20140927T213804Z.vol45.difftar.gpg', u'duplicity- full.20140927T213804Z.vol44.difftar.gpg', u'duplicity- full.20140927T213804Z.vol43.difftar.gpg', u'duplicity- full.20140927T213804Z.vol42.difftar.gpg', u'duplicity- full.20140927T213804Z.vol41.difftar.gpg', u'duplicity- full.20140927T213804Z.vol40.difftar.gpg', u'duplicity- full.20140927T213804Z.vol39.difftar.gpg', u'duplicity- full.20140927T213804Z.vol38.difftar.gpg', u'duplicity- full.20140927T213804Z.vol37.difftar.gpg', u'duplicity- full.20140927T213804Z.vol36.difftar.gpg', u'duplicity- full.20140927T213804Z.vol35.difftar.gpg', u'duplicity- full.20140927T213804Z.vol34.difftar.gpg', u'duplicity- full.20140927T213804Z.vol33.difftar.gpg', u'duplicity- full.20140927T213804Z.vol32.difftar.gpg', u'duplicity- full.20140927T213804Z.vol31.difftar.gpg', u'duplicity- full.20140927T213804Z.vol30.difftar.gpg', u'duplicity- full.20140927T213804Z.vol29.difftar.gpg', u'duplicity- full.20140927T213804Z.vol28.difftar.gpg', u'duplicity- full.20140927T213804Z.vol27.difftar.gpg', u'duplicity- full.20140927T213804Z.vol26.difftar.gpg', u'duplicity- full.20140927T213804Z.vol25.difftar.gpg', u'duplicity- full.20140927T213804Z.vol24.difftar.gpg', u'duplicity- full.20140927T213804Z.vol23.difftar.gpg', u'duplicity- 
full.20140927T213804Z.vol22.difftar.gpg', u'duplicity- full.20140927T213804Z.vol21.difftar.gpg', u'duplicity- full.20140927T213804Z.vol20.difftar.gpg', u'duplicity- full.20140927T213804Z.vol19.difftar.gpg', u'duplicity- full.20140927T213804Z.vol18.difftar.gpg', u'duplicity- full.20140927T213804Z.vol17.difftar.gpg', u'duplicity- full.20140927T213804Z.vol16.difftar.gpg', u'duplicity- full.20140927T213804Z.vol15.difftar.gpg', u'duplicity- full.20140927T213804Z.vol14.difftar.gpg', u'duplicity- full.20140927T213804Z.vol13.difftar.gpg', u'duplicity- full.20140927T213804Z.vol12.difftar.gpg', u'duplicity- full.20140927T213804Z.vol11.difftar.gpg', u'duplicity- full.20140927T213804Z.vol10.difftar.gpg', u'duplicity- full.20140927T213804Z.vol9.difftar.gpg', u'duplicity- full.20140927T213804Z.vol8.difftar.gpg', u'duplicity- full.20140927T213804Z.vol7.difftar.gpg', u'duplicity- full.20140927T213804Z.vol6.difftar.gpg', u'duplicity- full.20140927T213804Z.vol5.difftar.gpg', u'duplicity- full.20140927T213804Z.vol4.difftar.gpg', u'duplicity- full.20140927T213804Z.vol3.difftar.gpg', u'duplicity- full.20140927T213804Z.vol2.difftar.gpg', u'duplicity- full.20140927T213804Z.vol1.difftar.gpg'] File duplicity-full.20140927T213804Z.manifest.gpg is not part of a known set; creating new set File duplicity-full-signatures.20140927T213804Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- signatures.20140927T213804Z.sigtar.gpg' File duplicity-full.20140927T213804Z.vol153.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol152.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol151.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol150.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol149.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol148.difftar.gpg is part of known set File 
duplicity-full.20140927T213804Z.vol147.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol146.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol145.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol144.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol143.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol142.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol141.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol140.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol139.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol138.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol137.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol136.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol135.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol134.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol133.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol132.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol131.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol130.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol129.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol128.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol127.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol126.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol125.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol124.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol123.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol122.difftar.gpg is part of known set 
File duplicity-full.20140927T213804Z.vol121.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol120.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol119.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol118.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol117.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol116.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol115.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol114.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol113.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol112.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol111.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol110.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol109.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol108.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol107.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol106.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol105.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol104.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol103.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol102.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol101.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol100.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol99.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol98.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol97.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol96.difftar.gpg is part of known set 
File duplicity-full.20140927T213804Z.vol95.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol94.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol93.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol92.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol91.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol90.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol89.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol88.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol87.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol86.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol85.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol84.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol83.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol82.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol81.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol80.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol79.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol78.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol77.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol76.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol75.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol74.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol73.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol72.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol71.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol70.difftar.gpg is part of known set File 
duplicity-full.20140927T213804Z.vol69.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol68.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol67.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol66.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol65.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol64.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol63.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol62.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol61.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol60.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol59.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol58.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol57.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol56.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol55.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol54.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol53.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol52.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol51.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol50.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol49.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol48.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol47.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol46.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol45.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol44.difftar.gpg is part of known set File 
duplicity-full.20140927T213804Z.vol43.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol42.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol41.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol40.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol39.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol38.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol37.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol36.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol35.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol34.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol33.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol32.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol31.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol30.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol29.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol28.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol27.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol26.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol25.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol24.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol23.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol22.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol21.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol20.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol19.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol18.difftar.gpg is part of known set File 
duplicity-full.20140927T213804Z.vol17.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol16.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol15.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol14.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol13.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol12.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol11.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol10.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol9.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol8.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol7.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol6.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol5.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol4.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol3.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol2.difftar.gpg is part of known set File duplicity-full.20140927T213804Z.vol1.difftar.gpg is part of known set Found backup chain [Sat Sep 27 22:38:04 2014]-[Sat Sep 27 22:38:04 2014] Last full backup date: Sat Sep 27 22:38:04 2014 Collection Status ----------------- Connecting with backend: GDocsBackend Archive directory: /home/dummy/.cache/duplicity/allPhotos Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sat Sep 27 22:38:04 2014 Chain end time: Sat Sep 27 22:38:04 2014 Number of contained backup sets: 1 Total number of contained volumes: 153 Type of backup set: Time: Number of volumes: Full Sat Sep 27 22:38:04 2014 153 ------------------------- No orphaned or incomplete backup sets found. 
Registering (mktemp) temporary file /mnt/drobo/tmp/duplicity-1mMql0-tempdir/mktemp-rhi_y2-2 Attempt 1 failed: BackendException: Failed to download file 'duplicity- full.20140927T213804Z.vol57.difftar.gpg' in remote folder 'duplicity': Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/usr/lib/python2.7/dist- packages/duplicity/backends/gdocsbackend.py"", line 137, in get % (remote_filename, self.folder.title.text, str(e)), raise_errors) File ""/usr/lib/python2.7/dist- packages/duplicity/backends/gdocsbackend.py"", line 175, in __handle_error raise BackendException(message) BackendException: Failed to download file 'duplicity- full.20140927T213804Z.vol57.difftar.gpg' in remote folder 'duplicity': ^CReleasing lockfile Removing still remembered temporary file /mnt/drobo/tmp/duplicity-1mMql0-tempdir/mkstemp-K2SDpw-1 Removing still remembered temporary file /mnt/drobo/tmp/duplicity-1mMql0-tempdir/mktemp-rhi_y2-2 Cleanup of temporary directory /mnt/drobo/tmp/duplicity-1mMql0-tempdir failed - this is probably a bug. INT intercepted...exiting. **END LOG********************************* ***BEGIN ***Successful backup command****: duplicity \ --name=allPhotos\ --full-if-older-than=6M\ --log-file=/mnt/drobo/duplicitylogfile.txt\ --verbosity 8\ --volsize 256\ --tempdir /mnt/drobo/tmp \ --rsync-options=""--bwlimit=65""\ /mnt/drobo/Photos \ gdocs://suppressed0username@gmail.com/duplicity ```",10 118020748,2014-09-24 20:33:49.837,"not found in archive, no files restored (lp:#1373610)","[Original report](https://bugs.launchpad.net/bugs/1373610) created by **mancu (mancurian)** ``` I have been backing up for a while. Now I need a file, and I am having the classic issue of backup tools stopping when we really need them. 
I am trying to restore a file and getting this duplicity -v4 -r xxx/xxxxxx file:///cygdrive/v/xxxxx/DUPLICITY/xxxxxxx/ del Error 'basis_file must be a (true) file' patching . xxx/xxxxx not found in archive, no files restored. I tried it on Linux and Cygwin, same issue. ```",6 118020742,2014-09-24 09:30:31.010,Duplicity 0.6.24 doesn't show statistics after backup (lp:#1373327),"[Original report](https://bugs.launchpad.net/bugs/1373327) created by **sander eikelenboom (b-linux)** ``` After upgrading duplicity on my Debian systems I noticed that the backup statistics that were printed after a backup by default are not there anymore. The command line with which duplicity is invoked hasn't changed. Upgrade to 0.6.24 (backport package Debian wheezy) Before it used to print: -----------[Incremental]------------ Local and Remote metadata are synchronized, no sync needed. Last full backup date: Sun Sep 21 01:30:18 2014 --------------[ Backup Statistics ]-------------- StartTime 1411515020.44 (Wed Sep 24 01:30:20 2014) EndTime 1411515020.49 (Wed Sep 24 01:30:20 2014) ElapsedTime 0.05 (0.05 seconds) SourceFiles 1 SourceFileSize 4096 (4.00 KB) NewFiles 0 NewFileSize 0 (0 bytes) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 0 RawDeltaSize 0 (0 bytes) TotalDestinationSizeChange 103 (103 bytes) Errors 0 ------------------------------------------------- After it only prints: -----------[Incremental]------------ Local and Remote metadata are synchronized, no sync needed. Last full backup date: Sun Sep 21 01:30:18 2014 The manpage still says: --no-print-statistics By default duplicity will print statistics about the current session after a successful backup. This switch disables that behavior. 
So i would expect them to be still printed by default :) ```",6 118020741,2014-09-15 10:26:48.057,Feature Request: Option to forgo compression (or ability to specify tar options) (lp:#1369499),"[Original report](https://bugs.launchpad.net/bugs/1369499) created by **Fjodor (sune-molgaard)** ``` Hiya, I do duplicity backups to a local NAS, and from that, duplicity to remote storage. Now, the CPU on the NAS isn't very powerful, and since the first round of duplicity backup will already compress the files by default, I would like one or both of the following to be implemented: 1) Add an option for tar to skip compression or 2) Make a more general set of options to control the behaviour of tar, e.g. setting compression level, setting compression type, omitting compression etc. One possibility would be to implement 1), which should not be too difficult, and then work on 2) at a more leisurely pace. Best regards, Sune Mølgaard ```",6 118020736,2014-09-09 19:57:56.163,gpg-options bug (lp:#1367427),"[Original report](https://bugs.launchpad.net/bugs/1367427) created by **BRULE Herman (brule-herman)** ``` Hello, --gpg-options ""--compress-algo=bzip2 --bzip2-compress-level=9"" with duplicity do a bug: GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== usage: gpg [options] [filename] ===== End GnuPG log ===== Gpg work with this options: gpg --compress-algo=bzip2 --bzip2-compress-level=9 --output doc.txt.gpg --symmetric doc.txt Cheers, ``` Original tags: gpg",12 118020730,2014-08-24 01:42:11.678,"Duplicity fails with ""Invalid data - SHA1 hash mismatch for file"", does not retry download. (lp:#1360734)","[Original report](https://bugs.launchpad.net/bugs/1360734) created by **Eric O'Connor (oconnore)** ``` There appears to have been a corruption issue in my file download, about halfway through an 80GB restore. 
Invalid data - SHA1 hash mismatch for file: duplicity-full.20140720T050902Z.vol43.difftar.gpg Calculated hash: ee843c75152a61fd20c59e8cd3345288c0c52203 Manifest hash: 5971136422f4646cb06d722cf188bb04b7df810f When I checked the file hash on Amazon, it was exactly 5971136422f4646cb06d722cf188bb04b7df810f. I would expect duplicity to retry the download when the hash doesn't match. Duplicity version is: duplicity 0.6.24 Python: 2.7.8 Debian jessie/sid ```",18 118020728,2014-08-05 19:29:26.871,PAR2 not working with --archive-dir switch (lp:#1353066),"[Original report](https://bugs.launchpad.net/bugs/1353066) created by **holmz12 (holmz12-deactivatedaccount)** ``` I've upgraded to 0.6.24 and thought I would enable the new par2 wrapper backend. Unfortunately I think I've found a bug with it. My setup uses --archive-dir=somedir to place the cache files in a different location. With this it appears that duplicity only creates par2 files for the main volumes and fails to create a par2 file for either the signatures file or the manifest file. The result of this is when I try and do an incremental backup it says no such file when it tries to read the par2 signatures file. Timestamps of these files taken from when I was playing with this back in May, but the -v9 log below is from now. 
Without that switch: duplicity-full-signatures.20140527T171029Z.sigtar.gpg duplicity-full-signatures.20140527T171029Z.sigtar.gpg.par2 duplicity-full-signatures.20140527T171029Z.sigtar.gpg.vol000+188.par2 duplicity-full.20140527T171029Z.manifest.gpg duplicity-full.20140527T171029Z.manifest.gpg.par2 duplicity-full.20140527T171029Z.manifest.gpg.vol00+38.par2 duplicity-full.20140527T171029Z.vol1.difftar.gpg duplicity-full.20140527T171029Z.vol1.difftar.gpg.par2 duplicity-full.20140527T171029Z.vol1.difftar.gpg.vol000+198.par2 With that switch: duplicity-full-signatures.20140527T170909Z.sigtar.gpg duplicity-full.20140527T170909Z.manifest.gpg duplicity-full.20140527T170909Z.vol1.difftar.gpg duplicity-full.20140527T170909Z.vol1.difftar.gpg.par2 duplicity-full.20140527T170909Z.vol1.difftar.gpg.vol000+198.par2 Duplicity version 0.6.24, Python 2.7.8, FreeBSD 10.0-STABLE. My command line: duplicity full -v9 --allow-source-mismatch --archive-dir=archive --asynchronous-upload --encrypt-key=$ENCRYPTKEY --include-globbing- filelist=filelist --gpg-options=""--cipher-algo=AES256 --digest-algo=SHA512 --compress-algo=bzip2 --bzip2-compress-level=9"" --name=dropbox --sign- key=$SIGNKEY --volsize=30 / par2+file:///root/backup/dropbox/Backup The -v9 log shows that it doesn't attempt to write the .par2 files: Making directory archive/dropbox/duplicity_temp.1 Create Par2 recovery files Deleting archive/dropbox/duplicity_temp.1/duplicity-full- signatures.20140805T185 955Z.sigtar.gpg Writing /root/backup/dropbox/Backup/duplicity-full- signatures.20140805T185955Z.s igtar.gpg Deleting tree archive/dropbox/duplicity_temp.1 Selecting archive/dropbox/duplicity_temp.1 Deleting archive/dropbox/duplicity_temp.1 Deleting archive/dropbox/duplicity-full- signatures.20140805T185955Z.sigtar.gpg Making directory archive/dropbox/duplicity_temp.1 Create Par2 recovery files Deleting archive/dropbox/duplicity_temp.1/duplicity- full.20140805T185955Z.manife st.gpg Writing /root/backup/dropbox/Backup/duplicity- 
full.20140805T185955Z.manifest.gpg Deleting tree archive/dropbox/duplicity_temp.1 Selecting archive/dropbox/duplicity_temp.1 Deleting archive/dropbox/duplicity_temp.1 Deleting archive/dropbox/duplicity-full.20140805T185955Z.manifest.gpg Making directory /tmp/duplicity-fVl4Qm-tempdir/duplicity_temp.1 [Errno 2] No such file or directory: '/root/backup/dropbox/Backup/duplicity-ful$ .20140805T185955Z.manifest.gpg.par2' Releasing lockfile Removing still remembered temporary file /tmp/duplicity-fVl4Qm- tempdir/mktemp-LS aj4A-2 Removing still remembered temporary file /tmp/duplicity-fVl4Qm- tempdir/mkstemp-S Of3Zr-1 Cleanup of temporary directory /tmp/duplicity-fVl4Qm-tempdir failed - this is pr obably a bug. ``` Original tags: archive-dir par2",6 118020718,2014-07-30 15:13:42.876,cleanup results in huge process (lp:#1350404),"[Original report](https://bugs.launchpad.net/bugs/1350404) created by **Ian Chard (ian-chard)** ``` When I run duplicity with 'cleanup --extra-clean' on a large backup target, I get a 13GB process that runs forever (or at least for two hours, which is how long I ran it for). strace reveals that it doesn't do any I/O but spins on the CPU the whole time. 
Duplicity 0.6.24 Python 2.7.3 Debian Wheezy Here's the output of the process: # duplicity -v9 --archive-dir /data/duplicity-archive/ --gpg-options --homedir=/data/gpg-home/ --encrypt-key xxxxxxxx --s3-european-buckets --s3-use-new-style --asynchronous-upload --full-if-older-than 10D --allow- source-mismatch --num-retries 1 cleanup --extra-clean --force cf+http://vhost_backup_dir-mammoth_data_vhost_www.xyzxyz.com_photos Using archive dir: /data/duplicity-archive/b32efd07b4efc62f3a6f03722cbf64ae Using backup name: b32efd07b4efc62f3a6f03722cbf64ae Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded Main action: cleanup ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/local/bin/duplicity -v9 --archive-dir /data/duplicity-archive/ --gpg-options --homedir=/data/gpg-home/ --encrypt-key xxxxxxxx --s3-european-buckets --s3-use-new-style --asynchronous-upload --full-if- older-than 10D --allow-source-mismatch --num-retries 1 cleanup --extra- clean --force cf+http://vhost_backup_dir- mammoth_data_vhost_www.xyzxyz.com_photos Linux leopard1 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u1 x86_64 /usr/bin/python 2.7.3 (default, Mar 13 2014, 11:03:55) 
[GCC 4.7.2] ================================================================================ Listing '' Synchronizing remote metadata to local cache... Copying duplicity-full-signatures.20140703T113651Z.sigtar.gpg to local cache. Using temporary directory /tmp/duplicity-QZGsOA-tempdir Registering (mktemp) temporary file /tmp/duplicity-QZGsOA-tempdir/mktemp- _tPj6e-1 Downloading '/duplicity-full- signatures.20140703T113651Z.sigtar.gpg' [...hangs forever...] And here's the output of collection-status for this target: Local and Remote metadata are synchronized, no sync needed. Warning, found incomplete backup sets, probably left from aborted session Last full backup date: Sat Jul 26 10:24:43 2014 Collection Status ----------------- Connecting with backend: PyraxBackend Archive directory: /data/duplicity-archive/b32efd07b4efc62f3a6f03722cbf64ae Found 3 secondary backup chains. Secondary chain 1 of 3: ------------------------- Chain start time: Thu Jul 3 12:36:51 2014 Chain end time: Wed Jul 9 10:15:36 2014 Number of contained backup sets: 7 Total number of contained volumes: 5821  Type of backup set: Time: Number of volumes:                 Full Thu Jul 3 12:36:51 2014 2795          Incremental Fri Jul 4 09:35:39 2014 678          Incremental Sat Jul 5 09:18:24 2014 327          Incremental Sun Jul 6 09:36:55 2014 286          Incremental Mon Jul 7 09:21:16 2014 426          Incremental Tue Jul 8 10:40:53 2014 809          Incremental Wed Jul 9 10:15:36 2014 500 ------------------------- Secondary chain 2 of 3: ------------------------- Chain start time: Thu Jul 10 12:44:22 2014 Chain end time: Thu Jul 17 09:25:27 2014 Number of contained backup sets: 8 Total number of contained volumes: 8212  Type of backup set: Time: Number of volumes:                 Full Thu Jul 10 12:44:22 2014 2772          Incremental Fri Jul 11 10:08:27 2014 636          Incremental Sat Jul 12 09:11:38 2014 609          Incremental Sun Jul 13 09:23:05 2014 429          Incremental Mon Jul 14 
09:58:30 2014 656          Incremental Tue Jul 15 11:11:28 2014 1504          Incremental Wed Jul 16 09:10:31 2014 948          Incremental Thu Jul 17 09:25:27 2014 658 ------------------------- Secondary chain 3 of 3: ------------------------- Chain start time: Fri Jul 18 10:29:29 2014 Chain end time: Fri Jul 25 09:21:46 2014 Number of contained backup sets: 8 Total number of contained volumes: 7746  Type of backup set: Time: Number of volumes:                 Full Fri Jul 18 10:29:29 2014 2888          Incremental Sat Jul 19 09:19:32 2014 531          Incremental Sun Jul 20 09:35:08 2014 580          Incremental Mon Jul 21 09:56:36 2014 599          Incremental Tue Jul 22 10:37:19 2014 765          Incremental Wed Jul 23 07:51:22 2014 638          Incremental Thu Jul 24 10:00:52 2014 901          Incremental Fri Jul 25 09:21:46 2014 844 ------------------------- Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sat Jul 26 10:24:43 2014 Chain end time: Wed Jul 30 08:55:59 2014 Number of contained backup sets: 5 Total number of contained volumes: 5485  Type of backup set: Time: Number of volumes:                 Full Sat Jul 26 10:24:43 2014 2930          Incremental Sun Jul 27 08:56:41 2014 406          Incremental Mon Jul 28 08:39:07 2014 779          Incremental Tue Jul 29 09:29:27 2014 711          Incremental Wed Jul 30 08:55:59 2014 659 ------------------------- Also found 0 backup sets not part of any chain, and 8 incomplete backup sets. These may be deleted by running duplicity with the ""cleanup"" command. 
```",6 118022957,2014-07-24 13:44:16.285,Duplicity fails during backup in with_tempdir() (lp:#1348193),"[Original report](https://bugs.launchpad.net/bugs/1348193) created by **shankao (shankao)** ``` Description: Ubuntu Utopic Unicorn (development branch) Release: 14.10 duplicity:   Installed: 0.6.23-1ubuntu5   Candidate: 0.6.23-1ubuntu5   Version table:  *** 0.6.23-1ubuntu5 0         500 http://au.archive.ubuntu.com/ubuntu/ utopic/main amd64 Packages         100 /var/lib/dpkg/status During a full system backup, using the deja-dup, the process gets about half-way through and then stops with the following error: Backup Failed Failed with an unknown error. Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1494, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1488, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1337, in main     do_backup(action)   File ""/usr/bin/duplicity"", line 1458, in do_backup     full_backup(col_stats)   File ""/usr/bin/duplicity"", line 542, in full_backup     globals.backend)   File ""/usr/bin/duplicity"", line 403, in write_multivol     globals.gpg_profile, globals.volsize)   File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 324, in GPGWriteFile     file = GPGFile(True, path.Path(filename), profile)   File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 105, in __init__     self.logger_fp = tempfile.TemporaryFile( dir=tempdir.default().dir() )   File ""/usr/lib/python2.7/tempfile.py"", line 497, in TemporaryFile     (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags)   File ""/usr/lib/python2.7/tempfile.py"", line 239, in _mkstemp_inner     fd = _os.open(file, flags, 0600) OSError: [Errno 2] No such file or directory: '/tmp/duplicity- DEgmK7-tempdir/tmplcuF1s' The referred to directory does not exist, however, the following one does exist: /tmp/duplicity-l4bYos-tempdir/ ```",26 118020714,2014-07-16 13:20:36.348,librsync signature collisions/preimages 
(lp:#1342721),"[Original report](https://bugs.launchpad.net/bugs/1342721) created by **mik (therealmik)** ``` Note: this vulnerability is already public: https://github.com/librsync/librsync/issues/5 Sending a copy to duplicity, because I found this when auditing duplicity. The signatures generated by librsync for duplicity are md4 sums truncated to 64 bits. Collision attacks: An attacker could try to get two different entries into a signature file, in case the first one to be seen will be the only one replicated in delta files. This has almost no complexity for md4, and even for a strong hash truncated to 8 bytes would only require approximately 2^32 hashes to be generated (birthday collision). Second-Preimage attacks: An attacker could compute a hash for existing (but known) data that they want replicated elsewhere, and generate a different block that will later be replaced with the original. The main precondition for these attacks is that attacker-controlled data needs to be written to a file then taken out of context after restore. This is probably not an issue for a majority of users, but this could have security ramifications for people backing up databases, VM images or even log files. Replacing md4 in librsync with a 256-bit hash (blake2b is fast, sha256 is standard) would be the best option, obviously requiring a change in the signature file format (but not the delta file format - so backups could still be restored using older versions). 
```",260 118020706,2014-06-08 14:52:31.590,When incorrect ssh path provided coll-status always results in empty backup listing (lp:#1327792),"[Original report](https://bugs.launchpad.net/bugs/1327792) created by **Olivier Berger (oberger)** ``` Copying here a bug report already reported in Debian BTS, which I doubt is specific to Debian : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750893 Let's pretend I'm doing a collection-status on an existing backup dir with : $ duplicity collection-status --ssh-backend=pexpect --use-scp --ssh-askpass scp://root@aserver.local/mnt/backups/whatever Then, it will create a mnt/backups/whatever inside rooot's home on aserver.local which didn't exist already, instead of reporting an error (the URl should be scp://root@aserver.local//mnt/backups/whatever for /mnt/backups/whatever which does contain an existing backup). collection-status shouldn't do any directory creation and only report failure if the first dir of the path is missing. Instead, as exhibited with -v9, it iterates on creating alll dirs of the path, and then tries to list its contents, which most liely will be empty. Subsequent attempts by the user to debug will most likely too result in trying other duplicity options instead of fixing the wrong path, as there's nothing like 'dir mnt not found' like error. Thanks in advance. ```",6 118020689,2014-06-05 19:49:23.575,gpg wait thread broken (on some pythons) (lp:#1326944),"[Original report](https://bugs.launchpad.net/bugs/1326944) created by **Andrew Stubbs (ams-codesourcery)** ``` When duplicity launches GPG it uses a thread to contain the waitpid call, but this doesn't work on my machine. I have python 2.6 on an ARMv5te system, and I'm guessing this is somehow significant. The problem is, I think, that Python threading isn't true threading (in my case), and the wait thread blocks the main thread. The result is that it wait for GPG to exit before it has transmitted the passphrase or any of the data. 
This means that the whole process hangs and no backup happens. Anyway, whatever the precise cause, if I remove the whole thread and replace the ""thread.join"" call with a direct call to waitpid, then all works as expected. In fact, the current use of waitpid appears to show a misunderstanding of how it works; for example, there's a debug log line that suggests it can fail if waitpid is called after the child has already exited, which I don't believe is true on Linux. That said, perhaps there's an OS out there that works differently? ```",6 118020686,2014-06-04 17:51:51.724,--include option with range in [] fails if first item of the range has no match (lp:#1326472),"[Original report](https://bugs.launchpad.net/bugs/1326472) created by **Andrea Ballarati (ballarati)** ``` duplicity 0.6.18 Python 2.7.3 DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION=""Ubuntu 12.04.4 LTS"" target filesystem Linux ext4 command line: -v8 --ssh-askpass --volsize 2000 --no-encryption --asynchronous-upload --log-file /var/log/backupscript.log --exclude **/temp/** --exclude **.tmp --exclude **.bkp --include ignorecase:/srv/pub/rete/[q..z]** --exclude /srv/pub/rete /srv/pub/rete sftp://bkpusr@192.168.0.249//mnt/archivio/backup/ This command line fails (no files are backed up) since, I think, there is no directory that begins with ""q"". If I instead write /[qrstuwxyz]** the backup works as expected (i.e. it starts from the directories that begin with ""r""). The same command line works if I write /[r..z]** ```",6 118020679,2014-05-30 08:53:17.996,No support of long key_id or full fingerprint (lp:#1324819),"[Original report](https://bugs.launchpad.net/bugs/1324819) created by **4dro (kwadronaut)** ``` Tried using the full fingerprint as recommended by gnupg: 'The use of key IDs is just a shortcut; for all automated processing the fingerprint should be used.' [1] But that failed, and the same for using the long keyid. 
Error: Sign key should be an 8 character hex string, like 'AA0E73D2'. Error: Received '60DEADBEEF29E89B' instead. Relying on the short keyid is a bad idea: they can be easily spoofed [2]. Long keyids can also collide relatively easily [3], and they can cause serious side effects. [4] I haven't tested what happens when there are colliding fingerprints. It'd be nice if we could make use of full fingerprints. I have tried to look at where the format of a key is checked but got a bit lost. [1] https://www.gnupg.org/documentation/manuals/gnupg-devel/Specify-a-User-ID.html [2] http://www.asheesh.org/note/debian/short-key-ids-are-bad-news [3] http://thread.gmane.org/gmane.ietf.openpgp/7413 [4] https://www.debian-administration.org/users/dkg/weblog/105 ```",10 118020677,2014-05-23 03:32:01.695,Wishlist: Ability to fund development (lp:#1322411),"[Original report](https://bugs.launchpad.net/bugs/1322411) created by **Ringo Kamens (ringokamens-deactivatedaccount)** ``` I looked on the Duplicity website but was unable to find any place to donate money to fund development. Can we get this added to the site/manpage/something? It would be great if a bug bounty program were opened; I use duplicity on one of my servers and would gladly contribute to bugfixes and features. ```",6 118020676,2014-05-14 20:39:55.010,Giving up after 5 attempts. BackendException: Bad status code 200 reason OK. (lp:#1319557),"[Original report](https://bugs.launchpad.net/bugs/1319557) created by **L0RE (andreas-vogler)** ``` When I make a backup with Duply (Duplicity) it fails as follows. Using installed duplicity version 0.6.21, python 2.7.5+, gpg 1.4.14 (Home: ~/.gnupg), awk 'mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan', bash '4.2.45(1)-release (x86_64-pc-linux-gnu)'. Autoset found secret key of first GPG_KEY entry 'XXXXXX' for signing. 
Test - Encrypt to 88E30CC3 & Sign with XXXXX (OK) Test - Decrypt (OK) Test - Compare (OK) Cleanup - Delete '/tmp/duply.17059.1400099642_*'(OK) --- Start running command PRE at 22:34:03.210 --- Skipping n/a script '/root/.duply/www/pre'. --- Finished state OK at 22:34:03.223 - Runtime 00:00:00.012 --- --- Start running command BKP at 22:34:03.234 --- Using archive dir: /root/.cache/duplicity/duply_www Using backup name: duply_www Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.localbackend Succeeded Using WebDAV protocol http Using WebDAV host webdav.opendrive.com port None Using WebDAV directory /web-server/ Reading globbing filelist /root/.duply/www/exclude Main action: inc ================================================================================ duplicity 0.6.21 (January 23, 2013) Args: /usr/bin/duplicity --name duply_NAME --encrypt-keyXXXX --sign-key XXXX --verbosity 5 -v5 --exclude-globbing-filelist /root/.duply/NAME/exclude / webdavs://USER@HOST/DIR Linux SERVERNAME 3.11.0-20-generic #35-Ubuntu SMP Fri May 2 21:32:49 UTC 2014 x86_64 x86_64 /usr/bin/python 2.7.5+ (default, Feb 27 2014, 19:37:08) [GCC 4.8.1] ================================================================================ UNTIL: WebDAV PUT /web-server/duplicity-full.20140514T144621Z.vol83.difftar.gpg request with headers: {'Connection': 'keep-alive', 'Authorization': 'Basic STRING'} WebDAV data length: 
26271796 WebDAV response status 200 with reason 'OK'. Attempt 1 failed. BackendException: Bad status code 200 reason OK. This repeats 5 times: Giving up after 5 attempts. BackendException: Bad status code 200 reason OK. 22:37:38.770 Task 'BKP' failed with exit code '50'. --- Finished state FAILED 'code 50' at 22:37:38.770 - Runtime 00:03:35.535 --- --- Start running command POST at 22:37:38.787 --- Skipping n/a script '/root/.duply/www/post'. --- Finished state OK at 22:37:38.797 - Runtime 00:00:00.010 --- ```",6 118020645,2014-05-12 22:14:43.288,"""No such file or directory"" during backup (lp:#1318833)","[Original report](https://bugs.launchpad.net/bugs/1318833) created by **Michael Terry (mterry)** ``` I have an automatic backup system set up, running through a script in anacron. After upgrading to 14.04, duplicity fails with the following error (when run from a terminal): Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1488, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1337, in main do_backup(action) File ""/usr/bin/duplicity"", line 1458, in do_backup full_backup(col_stats) File ""/usr/bin/duplicity"", line 542, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 403, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 327, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 320, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-Hy90rz-tempdir/mktemp-SB_XJ9-2' I am running 14.04 fully updated and duplicity: Installed: 0.6.23-1ubuntu4 Candidate: 0.6.23-1ubuntu4 Version table: *** 0.6.23-1ubuntu4 0 500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status Any help? 
```",86 118019316,2014-05-03 17:17:31.607,Add tilde expansion in file selection (lp:#1315715),"[Original report](https://bugs.launchpad.net/bugs/1315715) created by **Jon Black (juan-black)** ``` I'm want to backup my home folder but exclude all hidden files and folder. If I try the option `--exclude .*` I get the error: Fatal Error: The file specification     .* cannot match any files in the base directory     /home/jon However, if I use the option `--exclude /home/jon/.*` it works fine. This is very contradicting. duplicity version: 0.6.23 ```",6 118020644,2014-05-03 05:10:50.084,Registering (mktemp) temporary file failed (lp:#1315594),"[Original report](https://bugs.launchpad.net/bugs/1315594) created by **Alexey (alisitskiy)** ``` update duplicity to 982 pip install http://bazaar.launchpad.net/~duplicity- team/duplicity/0.6-series/tarball/982 2014-05-03 02:15:01 START BACKUP Using archive dir: /backup/amazon/archive/photosignbackup Using backup name: photosignbackup Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.cloudfilesbackend Failed: the scheme cf+http already has a backend associated with it Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of 
duplicity.backends.webdavbackend Succeeded Main action: inc ================================================================================ duplicity $version ($reldate) Args: /usr/bin/duplicity -v9 --s3-unencrypted-connection --s3-use-new-style --s3-european-buckets --volsize 100 --tempdir /backup/amazon/tmp --archive- dir=/backup/amazon/archive --name=photosignbackup --full-if-older-than 180D /var/www/photositeuser/data/data s3+http://phototreasuredatabackupsphotos Linux 1045-1.ru 2.6.32-042stab084.17 #1 SMP Fri Dec 27 17:10:20 MSK 2013 i686 i686 /usr/bin/python 2.6.6 (r266:84292, Jan 22 2014, 09:37:14) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] ================================================================================ Using temporary directory /backup/amazon/tmp/duplicity-sCUyW4-tempdir Registering (mkstemp) temporary file /backup/amazon/tmp/duplicity- sCUyW4-tempdir/mkstemp-L_QHVv-1 Temp has 61136969728 available, backup will use approx 136314880. ........... Connecting with backend: BackendWrapper Archive dir: /backup/amazon/archive/photosignbackup Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sun Jan 12 02:16:14 2014 Chain end time: Sun Mar 9 02:15:03 2014 Number of contained backup sets: 46 Total number of contained volumes: 126 .............................. No orphaned or incomplete backup sets found. Registering (mktemp) temporary file /backup/amazon/tmp/duplicity- sCUyW4-tempdir/mktemp-tykmgk-2 Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.6/site-packages/duplicity/backend.py"", line 363, in inner_retry return fn(self, *args) File ""/usr/lib/python2.6/site-packages/duplicity/backend.py"", line 539, in get ""from backend"") % util.ufn(local_path.name)) BackendException: File /backup/amazon/tmp/duplicity-sCUyW4-tempdir/mktemp- tykmgk-2 not found locally after get from backend Attempt 1 failed. 
BackendException: File /backup/amazon/tmp/duplicity- sCUyW4-tempdir/mktemp-tykmgk-2 not found locally after get from backend Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.6/site-packages/duplicity/backend.py"", line 363, in inner_retry return fn(self, *args) File ""/usr/lib/python2.6/site-packages/duplicity/backend.py"", line 539, in get ""from backend"") % util.ufn(local_path.name)) BackendException: File /backup/amazon/tmp/duplicity-sCUyW4-tempdir/mktemp- tykmgk-2 not found locally after get from backend Attempt 2 failed. ........................ Releasing lockfile Removing still remembered temporary file /backup/amazon/tmp/duplicity- sCUyW4-tempdir/mktemp-tykmgk-2 Removing still remembered temporary file /backup/amazon/tmp/duplicity- sCUyW4-tempdir/mkstemp-L_QHVv-1 2014-05-03 02:17:05 BACKUP FINISHED AVERAGE BACKUP FILE PROCESSING SPEED WAS 0.00 KB\s ``` Original tags: temp",10 118020638,2014-05-01 12:40:40.622,"deja duplicity fails, backup and restore, unknown error (lp:#1314984)","[Original report](https://bugs.launchpad.net/bugs/1314984) created by **dancing (dancingmusic)** ``` unknown error upon either backup or restore Ubuntu 13.10, Linux ext4 target, restore to Linux external ext3 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1434, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1428, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1274, in main globals.lockfile.acquire(timeout = 0) File ""/usr/lib/python2.7/dist-packages/lockfile.py"", line 239, in acquire raise LockFailed(""failed to create %s"" % self.unique_name) LockFailed: failed to create /home/a/.cache/deja- dup/044e94704d3a9bd7c32d653e5dfdbbc0/star.MainThread-6825 Tried uninstalling deja dup and reinstalling, but there were dependency problems and neither worked from Software or Synaptic I don't know how to tell the version, or how to run it in a terminal, so I don't know how to get the -v9 option, but will be happy to do so 
with instructions. History: Did a successful backup and restore of Home, as myself. Did a backup of / as myself (couldn't figure out how to run it as root) to a different directory, which had an error near the end of the process, but I was able to restore some of the files anyway (which I deleted later, figuring I'd do it over). Now, however, the above error message occurs on either backup or restore. There were two screens saying I had to authenticate to run ~bin.sh. ``` Original tags: backup deja-dup duplicity error unknown",16 118020623,2014-04-28 21:45:17.443,"Restoring fails with ""GError: Error setting modification or access time: No such file or directory"" (lp:#1313944)","[Original report](https://bugs.launchpad.net/bugs/1313944) created by **Oehm (oehmannemuere)** ``` I am trying to restore a 2TB deja-dup backup over smb using duplicity after two failed deja-dup attempts. The backup of my home directory was done with Ubuntu 12.04 on a NAS (btrfs) in my LAN. After a complete re-install and upgrade to Ubuntu 14.04 I am trying to restore the data to an internal hard drive (ext4). The command I used: > duplicity --gio --verbosity=9 smb://USER@NAS.local/mybackup /media/LOCALUSER/DRIVENAME/restoretarget After ~60GB of successfully restored data duplicity stopped (and hung) after the output below. Even though the last line suggests that ""duplicity-uU6dfg-tempdir"" could not be removed, I was not able to locate it anywhere. [...] 
Processed volume 2295 of 64378 Registering (mktemp) temporary file /tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299 Writing /tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299 Attempt 1 failed: GError: Error setting modification or access time: No such file or directory Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 137, in copy_file target.get_parse_name()) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 112, in handle_error raise e GError: Error setting modification or access time: No such file or directory Writing /tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299 Attempt 2 failed: GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 137, in copy_file target.get_parse_name()) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 112, in handle_error raise e GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Writing /tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299 Attempt 3 failed: GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 137, in copy_file target.get_parse_name()) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 112, 
in handle_error raise e GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Writing /tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299 Attempt 4 failed: GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 318, in iterate return fn(*args, **kwargs) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 137, in copy_file target.get_parse_name()) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/giobackend.py"", line 112, in handle_error raise e GError: Error opening file '/tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299': No such file or directory Writing /tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299 Error opening file '/tmp/duplicity-uU6dfg-tempdir/mktemp-9WVdJO-2299': No such file or directory Releasing lockfile Removing still remembered temporary file /tmp/duplicity-uU6dfg- tempdir/mktemp-rkv29i-1736 Removing still remembered temporary file /tmp/duplicity-uU6dfg- tempdir/mktemp-ngak_K-1738 Removing still remembered temporary file /tmp/duplicity-uU6dfg- tempdir/mktemp-9WVdJO-2299 Removing still remembered temporary file /tmp/duplicity-uU6dfg- tempdir/mkstemp-z02uUj-1 Cleanup of temporary directory /tmp/duplicity-uU6dfg-tempdir failed - this is probably a bug. 
```",6 118020619,2014-04-27 18:42:21.805,Exception when trying to back up: ValueError: invalid literal for int() (lp:#1313413),"[Original report](https://bugs.launchpad.net/bugs/1313413) created by **Dmitry (shintyakov)** ``` Duplicity version: 0.6.23 Python version: 2.7.6 System: arch linux Previously I successfully backed up my system with duplicity, but now an attempt to launch gives me a stack trace with an exception: Last lines of the log: Added incremental Backupset (start_time: Sun Sep 23 18:13:44 2012 / end_time: Sun Sep 30 21:37:16 2012) Added set Sun Sep 30 21:37:16 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sun Sep 30 21:37:16 2012] Added incremental Backupset (start_time: Sun Sep 30 21:37:16 2012 / end_time: Sat Oct 20 13:43:12 2012) Added set Sat Oct 20 13:43:12 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Oct 20 13:43:12 2012] Added incremental Backupset (start_time: Sat Oct 20 13:43:12 2012 / end_time: Sat Oct 27 15:23:55 2012) Added set Sat Oct 27 15:23:55 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Oct 27 15:23:55 2012] Added incremental Backupset (start_time: Sat Oct 27 15:23:55 2012 / end_time: Mon Nov 5 13:20:41 2012) Added set Mon Nov 5 13:20:41 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Mon Nov 5 13:20:41 2012] Added incremental Backupset (start_time: Mon Nov 5 13:20:41 2012 / end_time: Sat Nov 10 23:52:07 2012) Added set Sat Nov 10 23:52:07 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Nov 10 23:52:07 2012] Added incremental Backupset (start_time: Sat Nov 10 23:52:07 2012 / end_time: Sat Nov 17 21:15:40 2012) Added set Sat Nov 17 21:15:40 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Nov 17 21:15:40 2012] Added incremental Backupset (start_time: Sat Nov 17 21:15:40 2012 / end_time: Sat Dec 1 20:38:29 2012) Added set Sat Dec 1 20:38:29 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Dec 1 20:38:29 2012] Added incremental Backupset (start_time: Sat Dec 1 20:38:29 
2012 / end_time: Fri Dec 14 23:36:22 2012) Added set Fri Dec 14 23:36:22 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Fri Dec 14 23:36:22 2012] Added incremental Backupset (start_time: Fri Dec 14 23:36:22 2012 / end_time: Thu Dec 27 23:44:18 2012) Added set Thu Dec 27 23:44:18 2012 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Thu Dec 27 23:44:18 2012] Added incremental Backupset (start_time: Thu Dec 27 23:44:18 2012 / end_time: Sat Jan 5 13:58:17 2013) Added set Sat Jan 5 13:58:17 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Jan 5 13:58:17 2013] Added incremental Backupset (start_time: Sat Jan 5 13:58:17 2013 / end_time: Sun Jan 27 01:36:32 2013) Added set Sun Jan 27 01:36:32 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sun Jan 27 01:36:32 2013] Added incremental Backupset (start_time: Sun Jan 27 01:36:32 2013 / end_time: Wed Feb 13 21:41:00 2013) Added set Wed Feb 13 21:41:00 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Wed Feb 13 21:41:00 2013] Added incremental Backupset (start_time: Wed Feb 13 21:41:00 2013 / end_time: Tue Mar 5 00:01:58 2013) Added set Tue Mar 5 00:01:58 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Tue Mar 5 00:01:58 2013] Added incremental Backupset (start_time: Tue Mar 5 00:01:58 2013 / end_time: Sat Mar 16 22:07:06 2013) Added set Sat Mar 16 22:07:06 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Mar 16 22:07:06 2013] Added incremental Backupset (start_time: Sat Mar 16 22:07:06 2013 / end_time: Sat Mar 23 21:58:49 2013) Added set Sat Mar 23 21:58:49 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Mar 23 21:58:49 2013] Added incremental Backupset (start_time: Sat Mar 23 21:58:49 2013 / end_time: Sun Mar 31 11:44:40 2013) Added set Sun Mar 31 11:44:40 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sun Mar 31 11:44:40 2013] Added incremental Backupset (start_time: Sun Mar 31 11:44:40 2013 / end_time: Sat Apr 20 23:22:58 2013) Added set Sat Apr 20 23:22:58 
2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Apr 20 23:22:58 2013] Added incremental Backupset (start_time: Sat Apr 20 23:22:58 2013 / end_time: Tue Apr 30 23:03:56 2013) Added set Tue Apr 30 23:03:56 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Tue Apr 30 23:03:56 2013] Added incremental Backupset (start_time: Tue Apr 30 23:03:56 2013 / end_time: Sun May 26 17:01:08 2013) Added set Sun May 26 17:01:08 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sun May 26 17:01:08 2013] Added incremental Backupset (start_time: Sun May 26 17:01:08 2013 / end_time: Sat Jun 8 23:25:14 2013) Added set Sat Jun 8 23:25:14 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sat Jun 8 23:25:14 2013] Added incremental Backupset (start_time: Sat Jun 8 23:25:14 2013 / end_time: Sun Jun 23 20:52:14 2013) Added set Sun Jun 23 20:52:14 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Sun Jun 23 20:52:14 2013] Added incremental Backupset (start_time: Sun Jun 23 20:52:14 2013 / end_time: Fri Jul 12 22:48:15 2013) Added set Fri Jul 12 22:48:15 2013 to pre-existing chain [Sun Feb 19 18:23:59 2012]-[Fri Jul 12 22:48:15 2013] Found backup chain [Mon Oct 14 23:36:31 2013]-[Mon Oct 14 23:36:31 2013] Ignoring incremental Backupset (start_time: Mon Oct 14 23:36:31 2013; needed: Fri Jul 12 22:48:15 2013) Added incremental Backupset (start_time: Mon Oct 14 23:36:31 2013 / end_time: Sat Dec 28 22:03:53 2013) Added set Sat Dec 28 22:03:53 2013 to pre-existing chain [Mon Oct 14 23:36:31 2013]-[Sat Dec 28 22:03:53 2013] Ignoring incremental Backupset (start_time: Sat Dec 28 22:03:53 2013; needed: Fri Jul 12 22:48:15 2013) Added incremental Backupset (start_time: Sat Dec 28 22:03:53 2013 / end_time: Sat Jan 11 19:02:38 2014) Added set Sat Jan 11 19:02:38 2014 to pre-existing chain [Mon Oct 14 23:36:31 2013]-[Sat Jan 11 19:02:38 2014] Ignoring incremental Backupset (start_time: Sat Jan 11 19:02:38 2014; needed: Fri Jul 12 22:48:15 2013) Added incremental Backupset 
(start_time: Sat Jan 11 19:02:38 2014 / end_time: Sat Feb 8 16:13:20 2014) Added set Sat Feb 8 16:13:20 2014 to pre-existing chain [Mon Oct 14 23:36:31 2013]-[Sat Feb 8 16:13:20 2014] Found backup chain [Sat Feb 15 12:34:57 2014]-[Sat Feb 15 12:34:57 2014] Last full backup date: Sat Feb 15 12:34:57 2014 Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /home/myname/.cache/duplicity/c3e6120fe7300af2676b1800e9af401f Found 2 secondary backup chains. Secondary chain 1 of 2: ------------------------- Chain start time: Sun Feb 19 18:23:59 2012 Chain end time: Fri Jul 12 22:48:15 2013 Number of contained backup sets: 33 Total number of contained volumes: 1249 Type of backup set: Time: Num volumes: Full Sun Feb 19 18:23:59 2012 527 Incremental Sun Feb 19 20:11:05 2012 1 Incremental Thu Mar 8 18:18:57 2012 11 Incremental Sat May 12 23:15:20 2012 10 Incremental Sun May 27 16:02:41 2012 1 Incremental Tue Jun 12 16:39:22 2012 1 Incremental Sun Aug 5 15:22:21 2012 36 Incremental Sat Aug 11 20:26:06 2012 1 Incremental Sun Aug 26 02:22:03 2012 15 Incremental Thu Sep 6 23:52:09 2012 7 Incremental Sun Sep 23 18:13:44 2012 59 Incremental Sun Sep 30 21:37:16 2012 17 Incremental Sat Oct 20 13:43:12 2012 4 Incremental Sat Oct 27 15:23:55 2012 1 Incremental Mon Nov 5 13:20:41 2012 95 Incremental Sat Nov 10 23:52:07 2012 8 Incremental Sat Nov 17 21:15:40 2012 33 Incremental Sat Dec 1 20:38:29 2012 26 Incremental Fri Dec 14 23:36:22 2012 104 Incremental Thu Dec 27 23:44:18 2012 1 Incremental Sat Jan 5 13:58:17 2013 2 Incremental Sun Jan 27 01:36:32 2013 133 Incremental Wed Feb 13 21:41:00 2013 1 Incremental Tue Mar 5 00:01:58 2013 2 Incremental Sat Mar 16 22:07:06 2013 3 Incremental Sat Mar 23 21:58:49 2013 2 Incremental Sun Mar 31 11:44:40 2013 1 Incremental Sat Apr 20 23:22:58 2013 3 Incremental Tue Apr 30 23:03:56 2013 134 Incremental Sun May 26 17:01:08 2013 5 Incremental Sat Jun 8 23:25:14 2013 2 Incremental Sun Jun 23 20:52:14 2013 1 Incremental 
Fri Jul 12 22:48:15 2013 2 ------------------------- Secondary chain 2 of 2: ------------------------- Chain start time: Mon Oct 14 23:36:31 2013 Chain end time: Sat Feb 8 16:13:20 2014 Number of contained backup sets: 4 Total number of contained volumes: 749 Type of backup set: Time: Num volumes: Full Mon Oct 14 23:36:31 2013 716 Incremental Sat Dec 28 22:03:53 2013 28 Incremental Sat Jan 11 19:02:38 2014 3 Incremental Sat Feb 8 16:13:20 2014 2 ------------------------- Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sat Feb 15 12:34:57 2014 Chain end time: Sat Feb 15 12:34:57 2014 Number of contained backup sets: 1 Total number of contained volumes: 702 Type of backup set: Time: Num volumes: Full Sat Feb 15 12:34:57 2014 702 ------------------------- No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. GnuPG passphrase: PASSPHRASE variable not set, asking user. Retype passphrase to confirm: PASSPHRASE variable not set, asking user. 
Registering (mktemp) temporary file /tmp/duplicity-aAlEHC- tempdir/mktemp-J5v_9q-2 Deleting /tmp/duplicity-aAlEHC-tempdir/mktemp-J5v_9q-2 Forgetting temporary file /tmp/duplicity-aAlEHC-tempdir/mktemp-J5v_9q-2 Releasing lockfile Removing still remembered temporary file /tmp/duplicity-aAlEHC- tempdir/mkstemp-f_Fx10-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1493, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1487, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1336, in main do_backup(action) File ""/usr/bin/duplicity"", line 1468, in do_backup check_last_manifest(col_stats) # not needed for full backup File ""/usr/bin/duplicity"", line 1169, in check_last_manifest last_backup_set.check_manifests() File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 190, in check_manifests remote_manifest = self.get_remote_manifest() File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 232, in get_remote_manifest return manifest.Manifest().from_string(manifest_buffer) File ""/usr/lib/python2.7/site-packages/duplicity/manifest.py"", line 186, in from_string vi = VolumeInfo().from_string(match.group(2)) File ""/usr/lib/python2.7/site-packages/duplicity/manifest.py"", line 377, in from_string self.end_block = int(other_fields[1]) ValueError: invalid literal for int() with base 10: '10Volume' Command line, used to run duplicity: duplicity --full-if-older-than 4M --allow-source-mismatch\ ... some include/exclude directives.... /home/myname \ file:///mnt/nfs/mybook/Backups/Home ```",6 118020614,2014-04-17 21:56:20.849,Cleanup of temporary directory failed (lp:#1309224),"[Original report](https://bugs.launchpad.net/bugs/1309224) created by **Tom Slominski (tomslominski)** ``` After trying to perform an incremental backup of a whole system, the backup has failed. This happened after trying to backup a large 41GB virtual machine disk. Doing this took several hours, and then the backup errored. 
Duplicity version: 0.6.21-0ubuntu4.2 Python version: 2.7.5-5ubuntu1 OS Distro and version: Ubuntu 13.10 Type of target filesystem: btrfs external hard drive Last lines of the verbose command line output before and up to the error: Getting delta of (('VirtualBox VMs', 'Ubuntu', 'Ubuntu.vdi') /home/tom/VirtualBox VMs/Ubuntu/Ubuntu.vdi reg) and (('VirtualBox VMs', 'Ubuntu', 'Ubuntu.vdi') reg) M VirtualBox VMs/Ubuntu/Ubuntu.vdi Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit Comparing ('VirtualBox VMs', 'Windows 7 64 bit') and ('VirtualBox VMs', 'Windows 7 64 bit') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit') /home/tom/VirtualBox VMs/Windows 7 64 bit dir) and (('VirtualBox VMs', 'Windows 7 64 bit') dir) A VirtualBox VMs/Windows 7 64 bit Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs') and ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs') /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs dir) and (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs') dir) A VirtualBox VMs/Windows 7 64 bit/Logs Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log') and ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log') /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log reg) and (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log') reg) M VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.1 Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.1') and None Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.1') /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.1 reg) and None A VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.1 Selecting /home/tom/VirtualBox VMs/Windows 7 64 
bit/Logs/VBox.log.2 Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.2') and None Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.2') /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.2 reg) and None A VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.2 Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.3 Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.3') and None Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Logs', 'VBox.log.3') /home/tom/VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.3 reg) and None A VirtualBox VMs/Windows 7 64 bit/Logs/VBox.log.3 Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Snapshots Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Snapshots') and None Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Snapshots') /home/tom/VirtualBox VMs/Windows 7 64 bit/Snapshots dir) and None A VirtualBox VMs/Windows 7 64 bit/Snapshots Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox') and ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox') /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox reg) and (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox') reg) M VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox- prev Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox- prev') and ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox- prev') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox-prev') /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox-prev reg) and (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vbox-prev') reg) M VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vbox-prev 
Selecting /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vdi Comparing ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vdi') and ('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vdi') Getting delta of (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vdi') /home/tom/VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vdi reg) and (('VirtualBox VMs', 'Windows 7 64 bit', 'Windows 7 64 bit.vdi') reg) M VirtualBox VMs/Windows 7 64 bit/Windows 7 64 bit.vdi AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-74MS55-tempdir/mktemp- jbJtS6-3 Releasing lockfile Removing still remembered temporary file /tmp/duplicity-74MS55-tempdir/mktemp-jbJtS6-3 Removing still remembered temporary file /tmp/duplicity-74MS55-tempdir/mkstemp-zYZHbk-1 Cleanup of temporary directory /tmp/duplicity-74MS55-tempdir failed - this is probably a bug. Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1434, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1428, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1277, in main do_backup(action) File ""/usr/bin/duplicity"", line 1410, in do_backup incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 586, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 391, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 324, in GPGWriteFile file = GPGFile(True, path.Path(filename), profile) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 105, in __init__ self.logger_fp = tempfile.TemporaryFile( dir=tempdir.default().dir() ) File ""/usr/lib/python2.7/tempfile.py"", line 493, in TemporaryFile (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags) File ""/usr/lib/python2.7/tempfile.py"", line 239, in _mkstemp_inner fd = _os.open(file, flags, 0600) OSError: [Errno 2] No such file or directory: '/tmp/duplicity-74MS55-tempdir/tmpqYcXYQ' Command 
line input: duplicity incr --exclude-filelist=/media/tom/Linux\ backups/gazelle/tom/exclude -v 9 /home/tom file:///media/tom/Linux\ backups/gazelle/tom After that, the backup proceeded as normal. ```",16 118018946,2014-04-10 20:19:23.885,Old backups are not deleted on Google Drive (lp:#1306242),"[Original report](https://bugs.launchpad.net/bugs/1306242) created by **Nichlas (nichlas-hummelsberger)** ``` I use version 0.6.23 and back up my files to Google Drive with the gdocs command. My problem is that duplicity doesn't delete old full backups when I ask it to clean up. Instead all the files end up as files with no category/folder, and are still taking up a lot of space - and because there are a large number of files it is not trivial to find them and delete them (this is mostly Google's fault, as it is hard to find files with no folder that also don't belong in the top folder). The only other mention I found about this was this mail on the mailing list: https://lists.nongnu.org/archive/html/duplicity-talk/2013-03/msg00024.html that sadly never got any answer. ``` Original tags: gdocs",58 118020613,2014-04-03 20:25:15.409,restore file to a folder (lp:#1302161),"[Original report](https://bugs.launchpad.net/bugs/1302161) created by **Steven Barre (slashterix)** ``` If --files-to-restore is a single file, target_dir will be treated as a filename. If target_dir exists but is empty, it is first deleted and then replaced with a file of the same name. There appears to be no way to restore a single file to a folder. I'm building a wrapper, so I don't know if --files-to-restore is a single file or a folder, else I could just append the basename to my target_dir ```",6 118020610,2014-04-03 20:09:39.007,collection-status should list dates in a format accepted by restore -t (lp:#1302151),"[Original report](https://bugs.launchpad.net/bugs/1302151) created by **Steven Barre (slashterix)** ``` The date/time printed out by collection-status are not acceptable for -t on restore. 
This means wrapping tools need to parse and transform the date. It would also be nice if collection-status could optionally be returned as XML/JSON or some other markup for easier parsing. duplicity 0.6.21 (January 23, 2013) Linux XXX 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 /usr/bin/python 2.6.6 (r266:84292, Nov 22 2013, 12:16:22) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] CentOS 6.5 ```",10 118020608,2014-03-20 17:23:37.843,Duplicity restore crashes on s3 glacier transitioned files (lp:#1295260),"[Original report](https://bugs.launchpad.net/bugs/1295260) created by **Matt Thompson (chameleonator)** ``` When we do our monthly testing of backups stored on s3, duplicity crashes out on transitioned glacier files that are still listed from the bucket. We're working around it by not listing the files in a patch, but if anyone has a better idea I'd be happy to implement it. I've linked our workaround. Here's our system info: Ubuntu 12.04.4 LTS Python 2.7.3 duplicity 0.6.20 boto 2.4.1 Here's an example restore that crashes out: Using archive dir: /mnt/cache2/e415c3e30cf174779e9311f3679408ac Using backup name: e415c3e30cf174779e9311f3679408ac Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: restore ================================================================================ duplicity 0.6.20 (October 28, 2012) Args: 
/usr/local/bin/duplicity restore --timeout=300 --verbosity=8 --archive-dir=/mnt/cache2 --no-encryption s3+http://6087053278/uploads ./uploads-restored2 -v9 Linux madison-csm2os-Leader-bpBTXTxy 3.2.0-54-virtual #82-Ubuntu SMP Tue Sep 10 20:31:18 UTC 2013 x86_64 x86_64 /usr/bin/python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] ================================================================================ Using temporary directory /tmp/duplicity-pSmd0h-tempdir Registering (mkstemp) temporary file /tmp/duplicity- pSmd0h-tempdir/mkstemp-Y2arEz-1 Temp has 7238836224 available, backup will use approx 34078720. Listing s3+http://6087053278/uploads Listed s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz ... SNIP ... Download s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz failed (attempt #1, reason: S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe operation is not valid for the object's storage classAAC655EFF3529702S3TPLPY2RmB9GeGNEKYVTxxS4R+tO+OrW+GIPRHqpzG3IsIDHEKj6ffpLCeT8HMB) Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/_boto_single.py"", line 241, in get key.get_contents_to_filename(local_path.name) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1367, in get_contents_to_filename response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1314, in get_contents_to_file response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1190, in get_file override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 218, in open override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 170, in open_read self.resp.reason, body) S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe 
operation is not valid for the object's storage classAAC655EFF3529702S3TPLPY2RmB9GeGNEKYVTxxS4R+tO+OrW+GIPRHqpzG3IsIDHEKj6ffpLCeT8HMB Downloading s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz Download s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz failed (attempt #2, reason: AttributeError: 'NoneType' object has no attribute 'has_key') Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/_boto_single.py"", line 241, in get key.get_contents_to_filename(local_path.name) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1367, in get_contents_to_filename response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1314, in get_contents_to_file response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1226, in get_file if self.size is None and not torrent and not headers.has_key(""Range""): AttributeError: 'NoneType' object has no attribute 'has_key' Downloading s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz Download s3+http://6087053278/uploads/duplicity-full- signatures.20121108T103500Z.sigtar.gz failed (attempt #3, reason: S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe operation is not valid for the object's storage class05BFAA5798376DAEbi8dKmIPHCg1yqdflcNlb/R/gGDNkZO9dCtM9j4h/CJQ5XADn4k9c+Ew7xUCbXD1) Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/_boto_single.py"", line 241, in get key.get_contents_to_filename(local_path.name) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1367, in get_contents_to_filename response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1314, in get_contents_to_file 
response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1190, in get_file override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 218, in open override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 170, in open_read self.resp.reason, body) S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe operation is not valid for the object's storage class05BFAA5798376DAEbi8dKmIPHCg1yqdflcNlb/R/gGDNkZO9dCtM9j4h/CJQ5XADn4k9c+Ew7xUCbXD1 Downloading s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz Download s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz failed (attempt #4, reason: AttributeError: 'NoneType' object has no attribute 'has_key') Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 241, in get key.get_contents_to_filename(local_path.name) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1367, in get_contents_to_filename response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1314, in get_contents_to_file response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1226, in get_file if self.size is None and not torrent and not headers.has_key(""Range""): AttributeError: 'NoneType' object has no attribute 'has_key' Downloading s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz Download s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz failed (attempt #5, reason: S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe operation is not valid for the object's storage class8FDBAFB522AD2FF8MdYu9030Wmww1SBIbASAIBY9KzckLIwCd6ZxZpAxHsh4lehSLKVhifF28F9vZw8c) 
Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 241, in get key.get_contents_to_filename(local_path.name) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1367, in get_contents_to_filename response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1314, in get_contents_to_file response_headers=response_headers) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1190, in get_file override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 218, in open override_num_retries=override_num_retries) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 170, in open_read self.resp.reason, body) S3ResponseError: S3ResponseError: 403 Forbidden InvalidObjectStateThe operation is not valid for the object's storage class8FDBAFB522AD2FF8MdYu9030Wmww1SBIbASAIBY9KzckLIwCd6ZxZpAxHsh4lehSLKVhifF28F9vZw8c Giving up trying to download s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz after 5 attempts Removing still remembered temporary file /tmp/duplicity-pSmd0h-tempdir/mktemp-7lBmhI-2 Removing still remembered temporary file /tmp/duplicity-pSmd0h-tempdir/mkstemp-Y2arEz-1 Backend error detail: Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1403, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1396, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1272, in main sync_archive(decrypt) File ""/usr/local/bin/duplicity"", line 1072, in sync_archive copy_to_local(fn) File ""/usr/local/bin/duplicity"", line 1013, in copy_to_local fileobj = globals.backend.get_fileobj_read(fn) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 555, in get_fileobj_read self.get(filename, tdp) 
File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 256, in get raise BackendException(""Error downloading %s/%s"" % (self.straight_url, remote_filename)) BackendException: Error downloading s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz BackendException: Error downloading s3+http://6087053278/uploads/duplicity-full-signatures.20121108T103500Z.sigtar.gz ```",14 118020603,2014-03-12 21:49:56.072,limit upload/download bandwidth (lp:#1291633),"[Original report](https://bugs.launchpad.net/bugs/1291633) created by **hovis (hovis)** ``` It would be useful to have the option to restrict the bandwidth Duplicity uses when uploading/downloading. I.e., on a 10Mb line, restrict to 5Mb so the line is not saturated during a long backup. One option when using scp:// used to be to add: --scp-command ""scp -l 5120"" but this no longer works due to the scp-command option being deprecated. Can a generic 'bandwidth limit' option be added? ``` Original tags: bandwidth rate restrict",22 118023076,2014-03-08 19:22:24.488,Backup unrestorable (lp:#1289850),"[Original report](https://bugs.launchpad.net/bugs/1289850) created by **Jörg Rolfsmeier (seelenmeier)** ``` Information: using: ubuntu 12.04.lts deja-dup 22.0-0ubuntu4 duplicity 0.6.18-0ubuntu3.4 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1414, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1407, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1287, in main globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 691, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 814, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 804, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 97, 
in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20140129T192751Z.vol1.difftar.gpg', 2: 'duplicity-full.20140129T192751Z.vol2.difftar.gpg', 3: 'duplicity-full.20140129T192751Z.vol3.difftar.gpg', ..., 381: 'duplicity-full.20140129T192751Z.vol381.difftar.gpg', 382: 'duplicity-full.20140129T192751Z.vol382.difftar.gpg'}, 'duplicity-full.20140129T192751Z.vol21.difftar') ```",6 118022537,2014-02-23 19:26:34.880,Failure to restore. Failed with unknown error: KeyError: 1 (lp:#1283799),"[Original report](https://bugs.launchpad.net/bugs/1283799) created by **dejuoops (stu3cla)** ``` I have Duplicity version 0.6.18, Python 2.7. I am using Ubuntu 12.04 LTS, with this as the target filesystem. 
The response I get when I try to restore is this: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1414, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1407, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1341, in main restore(col_stats) File ""/usr/bin/duplicity"", line 632, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 526, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 498, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 380, in yield_tuples setrorps( overflow, elems ) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 369, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 113, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 330, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 324, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 668, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 1 I would much appreciate it if someone could help me, I really need to get my files restored. ```",10 118020598,2014-02-23 14:49:27.804,broken with newest pyrax (lp:#1283738),"[Original report](https://bugs.launchpad.net/bugs/1283738) created by **Simao (simaomm)** ``` After upgrading to duplicity 0.6.23-0ubuntu0ppa21~saucy1 cloudfiles support no longer works. First it fails saying the `cf+http` backend requires `pyrax`: BackendException: This backend requires the pyrax library available from Rackspace. 
After installing pyrax with `sudo pip install pyrax`, I get the following error: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1489, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1483, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1317, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1027, in ProcessCommandLine globals.backend = backend.get_backend(args[0]) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 162, in get_backend return _backends[pu.scheme](pu) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/_cf_pyrax.py"", line 73, in __init__ self.container = pyrax.cloudfiles.create_container(container) AttributeError: 'NoneType' object has no attribute 'create_container' Also, there is the `--cf-backend` option documented in the man page, but duplicity doesn't actually support it: duplicity: error: no such option: --cf-backend duplicity 0.6.23 Python 2.7.5+ Ubuntu 13.10 ```",8 118022855,2014-02-04 21:49:23.148,Deja-Dup Restore Failure on FC19 (lp:#1276345),"[Original report](https://bugs.launchpad.net/bugs/1276345) created by **Val Eckertson (j-valeck)** ``` Fedora 19 x86_64 Latest version Restore error: Failed to read /home/val/.cache/deja- dup/tmp/duplicity-Y35aSf-tempdir/mktemp-2f1ney-1: (, IOError('CRC check failed 0xac6a31db != 0xbef8d0d5L',), ) 3. No such file found 4. No such file found Please see http://forums.fedoraforum.org/showthread.php?t=295900 Bug Submitted to Red Hat Bugzilla # 1060956 email: j.valeck@gmail.com ```",8 118020594,2014-01-26 02:18:22.930,UTF-8 comparison issue when using globbing file on Mac (lp:#1272814),"[Original report](https://bugs.launchpad.net/bugs/1272814) created by **Lucas (public-k)** ``` I've run into problems when using UTF-8 characters in a globbing file on OS X. So I'm trying to include a directory, let's call it ""Geschäftlich"", which contains a German umlaut. 
Said character may be represented in two (perhaps more) ways: Gesch\xc3\xa4ftlich This is the way my editor chose to represent it, as U+00E4, or ""LATIN SMALL LETTER A WITH DIAERESIS"". Unfortunately, my operating system begs to differ, and chooses the following form: Gescha\xcc\x88ftlich Here we have a plain old ""a"" character followed by U+0308, the ""COMBINING DIAERESIS"", which results in the same visual representation. The problem is then, that: >>> ""Gesch\xc3\xa4ftlich"" == ""Gescha\xcc\x88ftlich"" False So apparently the way to fix this is to ""normalize"" both unicode strings before comparison (http://stackoverflow.com/questions/16467479), like so: import unicodedata a = unicode(""Gesch\xc3\xa4ftlich"", ""UTF-8"") b = unicode(""Gescha\xcc\x88ftlich"", ""UTF-8"") >>> unicodedata.normalize(""NFC"", a) == unicodedata.normalize(""NFC"", b) True In the above example, NFC stands for ""Normal Form Composed"", as opposed to ""Normal Form Decomposed"" (NFD). Whether or not it is relevant which one you choose I have not figured out yet. I've figured out that the comparisons that fail due to this unicode snafu seem to be implemented inside the functions generated by ""glob_get_tuple_sf"" in ""selection.py"", but since I'm looking at the Duplicity source for the first time, it's kind of hard to tell whether or not this would have to be done elsewhere as well. I'd be willing to write a patch if someone would provide me with a little guidance on the matter :) Python Version: 2.7.5 Duplicity Version: 0.6.22 OS: Mac OS X 10.9 ```",6 118020587,2014-01-22 10:39:01.784,UnicodeEncodeError after Dropbox backup is done (lp:#1271481),"[Original report](https://bugs.launchpad.net/bugs/1271481) created by **Miro Hrončok (churchyard)** ``` Hi, I've got this error each time I use the Dropbox backend: $ duplicity ~ dpbx:// Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Tue Jan 14 12:51:14 2014 GnuPG passphrase: Retype passphrase to confirm: --------------[ Backup Statistics ]-------------- StartTime 1390385907.82 (Wed Jan 22 11:18:27 2014) EndTime 1390385950.76 (Wed Jan 22 11:19:10 2014) ElapsedTime 42.94 (42.94 seconds) SourceFiles 78321 SourceFileSize 9838764906 (9.16 GB) NewFiles 1283 NewFileSize 50808455 (48.5 MB) DeletedFiles 12 ChangedFiles 211 ChangedFileSize 62205324 (59.3 MB) ChangedDeltaSize 0 (0 bytes) DeltaEntries 1506 RawDeltaSize 59932930 (57.2 MB) TotalDestinationSizeChange 22618861 (21.6 MB) Errors 0 ------------------------------------------------- Exception ['ascii' codec can't encode character u'\u010d' in position 30: ordinal not in range(128)]: | Traceback (most recent call last): | File ""/usr/lib64/python2.7/site-packages/duplicity/backends/dpbxbackend.py"", line 83, in wrapper | return f(self, *args) | File ""/usr/lib64/python2.7/site-packages/duplicity/backends/dpbxbackend.py"", line 192, in close | log.Debug(':: %s=[%s]'%(k,info[k])) | File ""/usr/lib64/python2.7/site-packages/duplicity/log.py"", line 83, in Debug | Log(s, DEBUG) | File ""/usr/lib64/python2.7/site-packages/duplicity/log.py"", line 75, in Log | _logger.log(DupToLoggerLevel(verb_level), s.decode(""utf8"", ""ignore"")) | File ""/usr/lib64/python2.7/encodings/utf_8.py"", line 16, in decode | return codecs.utf_8_decode(input, errors, True) | UnicodeEncodeError: 'ascii' codec can't encode character u'\u010d' in position 30: ordinal not in range(128) dpbx code error ""'ascii' codec can't encode character u'\u010d' in position 30: ordinal not in range(128)"" Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1466, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1459, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1442, in main globals.backend.close() File ""/usr/lib64/python2.7/site-packages/duplicity/backends/dpbxbackend.py"", line 94, in wrapper raise e UnicodeEncodeError: 'ascii' 
codec can't encode character u'\u010d' in position 30: ordinal not in range(128) === EOF === It seems not to affect the backup. $ rpm -q duplicity duplicity-0.6.22-1.fc20.x86_64 $ rpm -q python python-2.7.5-9.fc20.x86_64 $ cat /etc/redhat-release Fedora release 20 (Heisenbug) ```",8 118020577,2014-01-13 11:28:57.941,"""IOError: [Errno 71] Protocol error"" during restore (lp:#1268558)","[Original report](https://bugs.launchpad.net/bugs/1268558) created by **Andre (gestatten-cox)** ``` During the encrypted restore the software crashes. Always on the same file. What can I do since my PC crashed and I need the restore urgently? Tried different versions and even on different Linux PCs. Same issue on the same file. Please help Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1414, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1407, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1341, in main restore(col_stats) File ""/usr/bin/duplicity"", line 632, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 528, in Write_ROPaths ITR( ropath.index, ropath ) File ""/usr/lib/python2.7/dist-packages/duplicity/lazy.py"", line 335, in __call__ last_branch.fast_process, args) File ""/usr/lib/python2.7/dist-packages/duplicity/robust.py"", line 37, in check_common_error return function(*args) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 581, in fast_process ropath.copy( self.base_path.new_index( index ) ) File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 426, in copy other.writefileobj(self.open(""rb"")) File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 600, in writefileobj fout = self.open(""wb"") File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 542, in open result = open(self.name, mode) IOError: [Errno 71] Protocol error: '/media/sf_Documents/restore/home/ubuntu/Documents/business/shopic
Lampen/Invoices/Rechnung 2356015\n.pdf' ```",8 118022950,2014-01-09 20:41:15.294,Automatic Backup after fresh install of trusty fails: no such file or directory (lp:#1267590),"[Original report](https://bugs.launchpad.net/bugs/1267590) created by **David Ayers (ayers)** ``` While testing the daily snapshot to verify a different issue, I configured deja-dup to add the typical load to my system. I configured an SSH upload. deja-dup failed with: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1473, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1466, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1436, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 541, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 402, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 324, in GPGWriteFile file = GPGFile(True, path.Path(filename), profile) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 105, in __init__ self.logger_fp = tempfile.TemporaryFile( dir=tempdir.default().dir() ) File ""/usr/lib/python2.7/tempfile.py"", line 493, in TemporaryFile (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags) File ""/usr/lib/python2.7/tempfile.py"", line 239, in _mkstemp_inner fd = _os.open(file, flags, 0600) OSError: [Errno 2] No such file or directory: '/tmp/duplicity-dH5tyj- tempdir/tmpTVOITr' ProblemType: Bug DistroRelease: Ubuntu 14.04 Package: deja-dup 29.1-0ubuntu4 ProcVersionSignature: Ubuntu 3.13.0-1.16-generic 3.13.0-rc7 Uname: Linux 3.13.0-1-generic x86_64 ApportVersion: 2.12.7-0ubuntu6 Architecture: amd64 CurrentDesktop: Unity Date: Thu Jan 9 21:36:37 2014 InstallationDate: Installed on 2014-01-09 (0 days ago) InstallationMedia: Ubuntu 14.04 LTS ""Trusty Tahr"" - Alpha amd64 (20140109) SourcePackage: deja-dup UpgradeStatus: No upgrade log present (probably fresh install) ``` Original tags: amd64 apport-bug trusty",14
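The `OSError: [Errno 2]` in the trusty report above comes from `tempfile` being pointed at a directory that no longer exists: `_mkstemp_inner` tries to open a path inside the `dir=` argument and gets ENOENT. A minimal standalone sketch reproducing the failure and one defensive recovery (`tempfile_in` is an illustrative helper, not duplicity code):

```python
import errno
import os
import tempfile

def tempfile_in(dirpath):
    # Open a temporary file in dirpath; if the directory has vanished
    # (the failure mode in the traceback above), re-create it and retry.
    try:
        return tempfile.TemporaryFile(dir=dirpath)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        os.makedirs(dirpath)  # directory was removed; restore it
        return tempfile.TemporaryFile(dir=dirpath)

base = tempfile.mkdtemp()
gone = os.path.join(base, "duplicity-tempdir")  # intentionally never created

try:
    tempfile.TemporaryFile(dir=gone)
except OSError as e:
    assert e.errno == errno.ENOENT  # same [Errno 2] as in the report

f = tempfile_in(gone)  # wrapper re-creates the directory and succeeds
f.close()
```

Whether re-creating the directory is the right fix for duplicity itself is a design question; the sketch only shows why the traceback ends in ENOENT when `/tmp/duplicity-*-tempdir` has been cleaned up underneath the process.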
118022939,2014-01-04 13:11:06.845,backup folder in ~/.cache/deja-dup/tmp does not exist (lp:#1266025),"[Original report](https://bugs.launchpad.net/bugs/1266025) created by **Andreas E. (andreas-e)** ``` Déjà-dup backups fail with an error message ""The backup folder »/home/username/.cache/deja-dup/tmp/duplicity-il- Au69-tempdir/mktemp-51Sz0x-3« does not exist."". This folder does indeed not exist. When I clicked the close button, deja-dup aborted. For weeks not a single deja-dup backup has completed successfully due to various reported bugs. When I attempt to restore files, I see that files have been backed up, but I cannot be sure whether the files are complete. This makes the backups absolutely useless because I have no guarantee that I can restore all the files. I would strongly recommend reviewing _all_ error messages that deja-dup might produce and allowing users to continue an interrupted backup process when the issue has been resolved. ```",6 118023060,2014-01-03 00:36:38.985,Cannot restore backup copy (lp:#1265676),"[Original report](https://bugs.launchpad.net/bugs/1265676) created by **Edgar Ramón Herrera Morgado (e-r-h-m)** ``` Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1414, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1407, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1289, in main globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 693, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 816, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 806, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 97, in add_filename (self.volume_name_dict, filename) AssertionError: ({ All files --- Todos los archivos}) ```",6
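The AssertionError in the restore report above appears to come from `collections.py` insisting that each volume number occur only once per backup set (`volume_name_dict`). The following is an illustrative reconstruction of that invariant, not duplicity's actual code; `VOL_RE` and `build_volume_dict` are invented for the demonstration of how a stray duplicate volume file trips the assertion:

```python
import re

# Hypothetical reconstruction of the check behind
# "assert ... (self.volume_name_dict, filename)": within one backup set,
# each volume number may be claimed by only one file.
VOL_RE = re.compile(r"\.vol(\d+)\.difftar")

def build_volume_dict(filenames):
    volume_name_dict = {}
    for filename in filenames:
        m = VOL_RE.search(filename)
        if not m:
            continue  # manifests, signatures, etc.
        vol_num = int(m.group(1))
        # The same volume number appearing twice (e.g. a stray duplicate
        # upload) is exactly what would trip the assertion in the traceback.
        assert vol_num not in volume_name_dict, (volume_name_dict, filename)
        volume_name_dict[vol_num] = filename
    return volume_name_dict

files = [
    "duplicity-full.20131111T011109Z.vol1.difftar.gpg",
    "duplicity-full.20131111T011109Z.vol2.difftar.gpg",
]
vols = build_volume_dict(files)  # maps volume number -> filename

duplicate_detected = False
try:
    build_volume_dict(files + [files[1]])  # vol2 claimed twice
except AssertionError:
    duplicate_detected = True
```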
118020573,2013-12-29 15:46:38.453,doesn't work with Python 2.4 in RHEL 5 (lp:#1264847),"[Original report](https://bugs.launchpad.net/bugs/1264847) created by **Rahul Sundaram (metherid)** ``` Duplicity README claims compatibility with 2.4; however, it doesn't start with the base version of Python in RHEL 5 https://bugzilla.redhat.com/show_bug.cgi?id=976873 ```",6 118022602,2013-12-16 00:56:47.989,Australian/Asian S3 buckets (lp:#1261246),"[Original report](https://bugs.launchpad.net/bugs/1261246) created by **Daniel Lo Nigro (daniel15)** ``` The manpage for Duplicity mentions European S3 buckets (and the --s3-european-buckets option) but has no mention of other regions, like Asia/Australia. ```",6 118020569,2013-12-12 23:51:02.816,"""BackendException: ssh connection failed: 'SSHClient' object has no attribute 'known_hosts'"" (lp:#1260541)","[Original report](https://bugs.launchpad.net/bugs/1260541) created by **iceflatline (iceflatline)** ``` I'm receiving the following error when running duplicity in conjunction with the verify command: ""BackendException: ssh connection to [host:port] failed: 'SSHClient' object has no attribute 'known_hosts'"" The command I'm using is constructed as follows: /usr/local/bin/duplicity verify --include /mnt/files/backup/ --exclude '**' scp://backup@//mnt/backup/weekly/ /mnt/files/ This has worked perfectly up until a new IP address was assigned to [host]. Now of course the system needs to (re)confirm my intent to connect but can't populate a known_hosts file. Using duplicity-0.6.22_1 under FreeBSD 9.2-RELEASE. Target [host] is also FreeBSD 9.2-RELEASE. Note these are not VMs, therefore this bug appears to be different from the one reported under Bug #1197092. Suggested work-arounds for this problem would be appreciated.
```",6 118020567,2013-12-06 17:11:08.730,Duplicity downloads old signatures when they are irrelevant (lp:#1258584),"[Original report](https://bugs.launchpad.net/bugs/1258584) created by **Dan Flexy (o-m7-q)** ``` I wonder why Duplicity is not smart enough not to download old signatures when they are irrelevant for the task at hand? Doing a standard, last backup restore. There is a full backup made a few days ago and a couple of relevant incremental changes. Instead of downloading the last duplicity-full file and its increments, duplicity starts to pull all signatures since a year ago! WHY? They are totally irrelevant for the task requested. I know I can clean them up, but this is not what I want to do as they are kept in case point-in-time recovery is needed. Haven't the authors considered this simple behaviour, or is there a reason it downloads all these old signatures? ```",6 118020565,2013-12-02 11:47:51.538,OverflowError while restoring a backup (encrypted/local repo) (lp:#1256897),"[Original report](https://bugs.launchpad.net/bugs/1256897) created by **Socket (nick-regist)** ``` Duplicity version: 0.6.22 (from tar.gz downloaded http://duplicity.nongnu.org/ ) Python version: 2.7.3 OS Distro: Kali Linux 1.0.5 32-Bit Type of target filesystem: Win 7 NTFS (actually I'm doing the restore via a VM with both repository and destination mounted as shared folders-- vboxsf. I preferred this setup because using cygwin I had other exceptions related to fork() and such) Description: While restoring a backup from a local repository, encrypted with gnupg, I got ""OverflowError: Python int too large to convert to C long"". This information being quite urgent, I'm asking whether there is a ""manual"" way to retrieve whatever is contained in the backup.
root@Kali:~# /usr/local/bin/duplicity -v9 file:///root/bkp/BKP_Reply/ dest/ Using archive dir: /root/.cache/duplicity/fdd725b8342168a02dea8939d396fc90 Using backup name: fdd725b8342168a02dea8939d396fc90 Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.tahoebackend Succeeded Main action: restore ================================================================================ duplicity 0.6.22 (August 22, 2013) Args: /usr/local/bin/duplicity -v9 file:///root/bkp/BKP_Reply/ dest/ Linux Kali 3.7-trunk-686-pae #1 SMP Debian 3.7.2-0+kali8 i686 /usr/bin/python 2.7.3 (default, Jan 2 2013, 16:53:07) [GCC 4.7.2] ================================================================================ Using temporary directory /tmp/duplicity-L0anjV-tempdir Registering (mkstemp) temporary file /tmp/duplicity-L0anjV-tempdir/mkstemp- cSTvG5-1 Temp has 8973217792 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 
18 files exist on backend 6 files exist in cache Extracting backup chains from list of files: ['duplicity-full- signatures.20131111T011109Z.sigtar.gpg', 'duplicity- full.20131111T011109Z.manifest.gpg', 'duplicity- full.20131111T011109Z.vol1.difftar.gpg', 'duplicity- full.20131111T011109Z.vol2.difftar.gpg', 'duplicity- full.20131111T011109Z.vol3.difftar.gpg', 'duplicity- full.20131111T011109Z.vol4.difftar.gpg', 'duplicity- full.20131111T011109Z.vol5.difftar.gpg', 'duplicity- full.20131111T011109Z.vol6.difftar.gpg', 'duplicity- full.20131111T011109Z.vol7.difftar.gpg', 'duplicity- full.20131111T011109Z.vol8.difftar.gpg', 'duplicity- inc.20131111T011109Z.to.20131127T214428Z.manifest.gpg', 'duplicity- inc.20131111T011109Z.to.20131127T214428Z.vol1.difftar.gpg', 'duplicity- inc.20131111T011109Z.to.20131127T214428Z.vol2.difftar.gpg', 'duplicity- inc.20131111T011109Z.to.20131127T214428Z.vol3.difftar.gpg', 'duplicity- inc.20131127T214428Z.to.20131128T001021Z.manifest.gpg', 'duplicity- inc.20131127T214428Z.to.20131128T001021Z.vol1.difftar.gpg', 'duplicity-new- signatures.20131111T011109Z.to.20131127T214428Z.sigtar.gpg', 'duplicity- new-signatures.20131127T214428Z.to.20131128T001021Z.sigtar.gpg'] File duplicity-full-signatures.20131111T011109Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- signatures.20131111T011109Z.sigtar.gpg' File duplicity-full.20131111T011109Z.manifest.gpg is not part of a known set; creating new set File duplicity-full.20131111T011109Z.vol1.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol2.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol3.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol4.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol5.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol6.difftar.gpg is part of known set File 
duplicity-full.20131111T011109Z.vol7.difftar.gpg is part of known set File duplicity-full.20131111T011109Z.vol8.difftar.gpg is part of known set File duplicity-inc.20131111T011109Z.to.20131127T214428Z.manifest.gpg is not part of a known set; creating new set File duplicity-inc.20131111T011109Z.to.20131127T214428Z.vol1.difftar.gpg is part of known set File duplicity-inc.20131111T011109Z.to.20131127T214428Z.vol2.difftar.gpg is part of known set File duplicity-inc.20131111T011109Z.to.20131127T214428Z.vol3.difftar.gpg is part of known set File duplicity-inc.20131127T214428Z.to.20131128T001021Z.manifest.gpg is not part of a known set; creating new set File duplicity-inc.20131127T214428Z.to.20131128T001021Z.vol1.difftar.gpg is part of known set File duplicity-new- signatures.20131111T011109Z.to.20131127T214428Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20131111T011109Z.to.20131127T214428Z.sigtar.gpg' File duplicity-new- signatures.20131127T214428Z.to.20131128T001021Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20131127T214428Z.to.20131128T001021Z.sigtar.gpg' Found backup chain [Mon Nov 11 02:11:09 2013]-[Mon Nov 11 02:11:09 2013] Added incremental Backupset (start_time: Mon Nov 11 02:11:09 2013 / end_time: Wed Nov 27 22:44:28 2013) Added set Wed Nov 27 22:44:28 2013 to pre-existing chain [Mon Nov 11 02:11:09 2013]-[Wed Nov 27 22:44:28 2013] Added incremental Backupset (start_time: Wed Nov 27 22:44:28 2013 / end_time: Thu Nov 28 01:10:21 2013) Added set Thu Nov 28 01:10:21 2013 to pre-existing chain [Mon Nov 11 02:11:09 2013]-[Thu Nov 28 01:10:21 2013] Last full backup date: Mon Nov 11 02:11:09 2013 Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /root/.cache/duplicity/fdd725b8342168a02dea8939d396fc90 Found 0 secondary backup chains. 
Found primary backup chain with matching signature chain: ------------------------- Chain start time: Mon Nov 11 02:11:09 2013 Chain end time: Thu Nov 28 01:10:21 2013 Number of contained backup sets: 3 Total number of contained volumes: 12 Type of backup set: Time: Num volumes: Full Mon Nov 11 02:11:09 2013 8 Incremental Wed Nov 27 22:44:28 2013 3 Incremental Thu Nov 28 01:10:21 2013 1 ------------------------- No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. GnuPG passphrase: Registering (mktemp) temporary file /tmp/duplicity-L0anjV-tempdir/mktemp- lABFYV-2 Registering (mktemp) temporary file /tmp/duplicity-L0anjV-tempdir/mktemp- CipsuY-3 Registering (mktemp) temporary file /tmp/duplicity-L0anjV-tempdir/mktemp- pGWXBO-4 Writing Desktop of type dir Making directory dest/Desktop [...snip...] Deleting /tmp/duplicity-L0anjV-tempdir/mktemp-lABFYV-2 Forgetting temporary file /tmp/duplicity-L0anjV-tempdir/mktemp-lABFYV-2 Processed volume 1 of 12 Registering (mktemp) temporary file /tmp/duplicity-L0anjV- tempdir/mktemp-k6uzD3-5 Deleting /tmp/duplicity-L0anjV-tempdir/mktemp-k6uzD3-5 Forgetting temporary file /tmp/duplicity-L0anjV-tempdir/mktemp-k6uzD3-5 Processed volume 2 of 12 Registering (mktemp) temporary file /tmp/duplicity-L0anjV-tempdir/mktemp- RuQ4T8-6 Deleting /tmp/duplicity-L0anjV-tempdir/mktemp-RuQ4T8-6 Forgetting temporary file /tmp/duplicity-L0anjV-tempdir/mktemp-RuQ4T8-6 Processed volume 3 of 12 Registering (mktemp) temporary file /tmp/duplicity-L0anjV- tempdir/mktemp-64oBHh-7 Deleting /tmp/duplicity-L0anjV-tempdir/mktemp-64oBHh-7 Forgetting temporary file /tmp/duplicity-L0anjV-tempdir/mktemp-64oBHh-7 Processed volume 4 of 12 Registering (mktemp) temporary file /tmp/duplicity-L0anjV-tempdir/mktemp- _M17PI-8 Writing Desktop/Tresorit.lnk of type reg [...snip...] 
Removing still remembered temporary file /tmp/duplicity-L0anjV- tempdir/mkstemp-cSTvG5-1 Removing still remembered temporary file /tmp/duplicity-L0anjV- tempdir/mktemp-pGWXBO-4 Removing still remembered temporary file /tmp/duplicity-L0anjV- tempdir/mktemp-CipsuY-3 Removing still remembered temporary file /tmp/duplicity-L0anjV- tempdir/mktemp-_M17PI-8 Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1466, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1459, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1393, in main restore(col_stats) File ""/usr/local/bin/duplicity"", line 687, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 528, in Write_ROPaths ITR( ropath.index, ropath ) File ""/usr/local/lib/python2.7/dist-packages/duplicity/lazy.py"", line 335, in __call__ last_branch.fast_process, args) File ""/usr/local/lib/python2.7/dist-packages/duplicity/robust.py"", line 37, in check_common_error return function(*args) File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 581, in fast_process ropath.copy( self.base_path.new_index( index ) ) File ""/usr/local/lib/python2.7/dist-packages/duplicity/path.py"", line 444, in copy self.copy_attribs(other) File ""/usr/local/lib/python2.7/dist-packages/duplicity/path.py"", line 449, in copy_attribs util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) File ""/usr/local/lib/python2.7/dist-packages/duplicity/util.py"", line 65, in maybe_ignore_errors return fn() File ""/usr/local/lib/python2.7/dist-packages/duplicity/path.py"", line 449, in util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) OverflowError: Python int too large to convert to C long ^[^CException KeyboardInterrupt in ignored ``` Original tags: encrypted gnupg local+repository restore",6 118020554,2013-11-02 08:57:36.852,No Option for 
Preserving Times? (lp:#1247347),"[Original report](https://bugs.launchpad.net/bugs/1247347) created by **Lonnie Lee Best (launchpad-startport)** ``` Zentyal uses duplicity to perform backups, and I just restored one of those backups using deja-dup. I restored not the latest files, but to a previous point in time. After restoring the files, I was going to check them to see if the files were actually from the time I restored to, but without looking at the content of particular files I was unable to tell. You see, all restored files had time-stamps of the time they were restored instead of the times they had at the backup source. See this screen-shot: http://neartalk.com/ss/2013-11-02_03:24:37.png Rsync has a --times argument for preserving the times associated with a source file. I was unable to find such an argument in duplicity. Am I overlooking something, or is this a legitimate enhancement request? ``` Original tags: saucy",8 118020534,2013-11-01 20:59:48.171,"Backup fails with error: ""Error setting permissions: Function not implemented"" (lp:#1247276)","[Original report](https://bugs.launchpad.net/bugs/1247276) created by **Will Palmer (wmpalmer)** ``` I think the tool is: deja-dup 27.3.1 Attempting to back up to a ""My Passport"" drive completes the ""scanning"" phase as usual, begins copying files for a few moments, then fails with the following error message: Error setting permissions: Function not implemented I do not see any further details along with this error message. I have been successfully backing up for two weeks with this tool, but this week it gave the error message and went no further. I have recently upgraded to Ubuntu 13.10 ```",20 118020440,2013-10-29 08:09:20.850,"Crash with stacktrace, error code 30 in validate_encryption_settings (lp:#1245805)","[Original report](https://bugs.launchpad.net/bugs/1245805) created by **Damien Cassou (cassou)** ``` While running duplicity through duply, I get: [...]
Found primary backup chain with matching signature chain: ------------------------- Chain start time: Fri Oct 25 17:41:51 2013 Chain end time: Fri Oct 25 17:41:51 2013 Number of contained backup sets: 1 Total number of contained volumes: 13 Type of backup set: Time: Num volumes: Full Fri Oct 25 17:41:51 2013 13 ------------------------- No orphaned or incomplete backup sets found. Reuse configured PASSPHRASE as SIGN_PASSPHRASE RESTART: Volumes 13 to 473 failed to upload before termination. Restarting backup at volume 13. Removing still remembered temporary file /tmp/duplicity-H1PRj7-tempdir/mkstemp-Bvibdh-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1411, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1404, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1374, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 509, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 346, in write_multivol validate_encryption_settings(globals.restart.last_backup, mf) File ""/usr/bin/duplicity"", line 325, in validate_encryption_settings if vol1_filename != backup_set.volume_name_dict[1]: KeyError: 1 09:03:40.631 Task 'BKP' failed with exit code '30'. 
ProblemType: Bug DistroRelease: Ubuntu 13.10 Package: duplicity 0.6.21-0ubuntu4 ProcVersionSignature: Ubuntu 3.11.0-12.19-generic 3.11.3 Uname: Linux 3.11.0-12-generic x86_64 NonfreeKernelModules: wl ApportVersion: 2.12.5-0ubuntu2.1 Architecture: amd64 Date: Tue Oct 29 09:04:56 2013 InstallationDate: Installed on 2013-10-21 (7 days ago) InstallationMedia: Ubuntu 13.10 ""Saucy Salamander"" - Release amd64 (20131016.1) MarkForUpload: True SourcePackage: duplicity UpgradeStatus: No upgrade log present (probably fresh install) ``` Original tags: amd64 apport-bug saucy",10 118020438,2013-10-21 03:29:33.128,Connection issue restarts entire backup (lp:#1242507),"[Original report](https://bugs.launchpad.net/bugs/1242507) created by **krbvroc1 (kbass)** ``` During a nightly incremental backup there was a connection issue. This corrupted/restarted the backup. Scenario: Upload of volume was OK Upload of new signatures was OK Upload of manifest FAILED due to 'No route to host'. BackendException, upload failed, aborted. On the next nightly incremental backup, I would expect recovery. Instead I got: Deleting local manifest (not authoritative at backend) and then on version 0.6.21, I got ERROR 30 AssertionError . Traceback (most recent call last): . File ""/usr/bin/duplicity"", line 1411, in ? . with_tempdir(main) . File ""/usr/bin/duplicity"", line 1404, in with_tempdir . fn() . File ""/usr/bin/duplicity"", line 1286, in main . globals.archive_dir).set_values() . File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 691, in set_values . (backup_chains, self.orphaned_backup_sets, . File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 816, in get_backup_chains . map(add_to_sets, filename_list) . File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 806, in add_to_sets . if set.add_filename(filename): . File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 93, in add_filename .
self.set_manifest(filename) . File ""/usr/lib64/python2.4/site-packages/duplicity/collections.py"", line 123, in set_manifest . assert not self.remote_manifest_name, (self.remote_manifest_name, . AssertionError: ('duplicity- inc.20130823T053006Z.to.20130824T054019Z.manifest.part', u'duplicity- inc.20130823T053006Z.to.20130824T054019Z.manifest.gpg') When I tried to reproduce this with the current dev tree (rev 930), instead of reproducing the AssertionError, I got: No orphaned or incomplete backup sets found. RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning. To reproduce: I set a breakpoint in the backend put (upload) routine. On each breakpoint I examined the file name being uploaded, and when it was the manifest, I caused the network connection to fail. (In my case I modified /etc/hosts to redirect the backup domain to an IP that returns connection refused or no route to host.) After duplicity exits with the backend exception, I restore the connection and the RESTART happens. For a multi-GB nightly incremental backup, a RESTART of the entire archive is drastic due to one session's internet connection issues. I'm not familiar with the design, but some thoughts: 1) When this happens, why not use the local file rather than deleting it as non-authoritative? 2) For an incremental backup, would it be possible to delete the last incomplete increment and continue the backup from there? 3) Why does it say 'No orphaned or incomplete backup sets found.'? Wouldn't this scenario of a missing manifest be an incomplete backup set? ```",6 118020424,2013-10-09 20:03:43.409,Déjà-Dup 26.0 full-backup crashes and starts again over and over (lp:#1237631),"[Original report](https://bugs.launchpad.net/bugs/1237631) created by **Adrien Robin (ninoaderri)** ``` From a fresh install of Ubuntu 13.04, I started a full-backup of my data as I used to do with Ubuntu 12.04.
But at a certain point, the backup crashes and starts from the beginning again with the pop-up asking me if I want to encrypt the data. The output log looks regular until the 1909th volume: for this volume, after having been written to the disk, and after the temp file has been deleted, duplicity tries to remove it instead of forgetting it as was the case for the previous volumes. This leads to a crash. Description: Ubuntu 13.04 deja-dup 26.0-0ubuntu1 duplicity 0.6.21-0ubuntu1.1 org.gnome.DejaDup backend 'file' org.gnome.DejaDup delete-after 0 org.gnome.DejaDup exclude-list ['$TRASH', '$DOWNLOAD', '/home/adrien/Sauvegardes'] org.gnome.DejaDup full-backup-period 90 org.gnome.DejaDup include-list ['$HOME'] org.gnome.DejaDup last-backup '' org.gnome.DejaDup last-restore '' org.gnome.DejaDup last-run '' org.gnome.DejaDup nag-check '' org.gnome.DejaDup periodic false org.gnome.DejaDup periodic-period 7 org.gnome.DejaDup prompt-check '2013-10-02T14:38:22.157279Z' org.gnome.DejaDup root-prompt true org.gnome.DejaDup welcomed true org.gnome.DejaDup.File icon '' org.gnome.DejaDup.File name '' org.gnome.DejaDup.File path 'file:///home/adrien/Sauvegardes' org.gnome.DejaDup.File relpath@ ay [] org.gnome.DejaDup.File short-name '' org.gnome.DejaDup.File type 'normal' org.gnome.DejaDup.File uuid '' org.gnome.DejaDup.Rackspace container 'PC-Adrien' org.gnome.DejaDup.Rackspace username '' org.gnome.DejaDup.S3 bucket '' org.gnome.DejaDup.S3 folder 'PC-Adrien' org.gnome.DejaDup.S3 id '' org.gnome.DejaDup.U1 folder '/deja-dup/PC-Adrien' ```",10 118020419,2013-09-21 23:43:17.644,doesn't handle broken output pipe (lp:#1228722),"[Original report](https://bugs.launchpad.net/bugs/1228722) created by **Steven Barre (slashterix)** ``` If I pipe the output of 'duplicity list-current-files' to another program, and that program dies, I get this error for each output line left to be written to the pipe: # duplicity list-current-files | php myscript.php IOError: [Errno 32] Broken pipe
Traceback (most recent call last): File ""/usr/lib64/python2.6/logging/__init__.py"", line 800, in emit self.flush() File ""/usr/lib64/python2.6/logging/__init__.py"", line 762, in flush self.stream.flush() Duplicity should handle this error and exit, instead of continuing to dump data to the pipe. Duplicity version 0.6.21 Python version 2.6.6 OS Distro and version CentOS 6.4 x86_64 ```",6 118020417,2013-09-18 16:21:23.576,progress exception (lp:#1227226),"[Original report](https://bugs.launchpad.net/bugs/1227226) created by **Mark Hesel (markhesel)** ``` duplicity 0.6.22 Python 2.6.4 Ubuntu 9.10 (Karmic Koala) When using the --progress feature I get the exception below, and after this it is not working properly. root@server:~/duplicity# ./backup Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Password for 'root@1.2.3.4': Local and Remote metadata are synchronized, no sync needed. Warning, found incomplete backup sets, probably left from aborted session Last full backup date: none No signatures found, switching to full backup. Exception in thread Thread-2: Traceback (most recent call last): File ""/usr/lib/python2.6/threading.py"", line 525, in __bootstrap_inner self.run() File ""/usr/local/lib/python2.6/dist-packages/duplicity/progress.py"", line 353, in run tracker.log_upload_progress() File ""/usr/local/lib/python2.6/dist-packages/duplicity/progress.py"", line 267, in log_upload_progress self.time_estimation = long(projection * float(self.elapsed_sum.total_seconds())) AttributeError: 'datetime.timedelta' object has no attribute 'total_seconds' ```",10 118020415,2013-09-10 14:24:26.024,Client Certificate support for webdavs backend (lp:#1223384),"[Original report](https://bugs.launchpad.net/bugs/1223384) created by **Nicklas Björk (nicklas-3)** ``` I have a situation where I would like to use the webdavs backend to store Duplicity dumps.
The catch is that the server is using SSL client certificates for authentication, which does not seem to be supported by Duplicity. Some quick research shows that it seems to be doable using urllib2 (http://www.osmonov.com/2009/04/client-certificates-with-urllib2.html). Would it be possible to implement this feature in Duplicity? ```",6 118020403,2013-09-05 09:51:45.677,duplicity should allow restoring a file without knowing the signing key (lp:#1221117),"[Original report](https://bugs.launchpad.net/bugs/1221117) created by **Robert Buchholz (rbu)** ``` Considering I have a backup and lost the signing key, but it was encrypted for a master key (that I still have), I would like to be able to restore files from that backup. However, ""duplicity restore"" fails because of a failing signature validation due to me missing the public signing key to the signature. It should allow overriding / ignoring this check when specifying an option, such as --ignore-errors ```",10 118019305,2013-09-03 19:22:10.044,should set nice level (lp:#1220396),"[Original report](https://bugs.launchpad.net/bugs/1220396) created by **Thomas Guettler (guettli-lp)** ``` If I run a backup, the whole system is slow. I think the backup process should set a nice level (and ionice), since it is a background job. The interactive processes (terminal, webbrowser ...) should not be slowed down by the backup process. ```",10 118020393,2013-08-06 10:16:40.658,Function remove-older-than fails with duplicity 0.6.21 (lp:#1208791),"[Original report](https://bugs.launchpad.net/bugs/1208791) created by **Samuel Bancal (samuel-bancal)** ``` Duplicity : from PPA 0.6.21-0ubuntu0ppa21~precise1 Python : 2.7.3 Ubuntu : 12.04.2 (64bits) Target : Ubuntu 12.04.2 (64bits) over ssh (iSCSI storage, EXT4) On several servers, I did daily backups, followed by remove-older-than 1Y (without --force). Yesterday I noticed that old backups didn't disappear, so I added --force (I guess this information won't help ... but in case...)
Now every time I run: sudo duplicity remove-older-than 1Y --force -v2 --no-encryption --ssh- options=-oIdentityFile=/home/user/.ssh/id_rsa_backups_xxx scp://h_xxx@bkp_srv.domain.org/backup I get an error like this: sftp rm duplicity-inc.20120412T004301Z.to.20120413T004301Z.manifest failed: [Errno 2] No such file (Try 2 of 5) Will retry in 10 seconds. sftp rm duplicity-inc.20120412T004301Z.to.20120413T004301Z.manifest failed: [Errno 2] No such file (Try 3 of 5) Will retry in 10 seconds. sftp rm duplicity-inc.20120412T004301Z.to.20120413T004301Z.manifest failed: [Errno 2] No such file (Try 4 of 5) Will retry in 10 seconds. sftp rm duplicity-inc.20120412T004301Z.to.20120413T004301Z.manifest failed: [Errno 2] No such file The mentioned file does exist on the backup server and in the .cache/duplicity/xxx/ local folder. After that run, both files are removed and I can run it again to discover the next file it attempts to remove. I tried with both paramiko 1.7.7.1-2ubuntu1 (from the Ubuntu package) and with 1.11.0 (installed with pip). Finally I tried to downgrade duplicity to 0.6.18-0ubuntu3.1 (the default package for Ubuntu 12.04) ... I get another warning/error message (Import of duplicity.backends.giobackend Failed: No module named gio) ... but it works! ```",20 118020386,2013-07-04 20:36:51.886,Add support for long-term storage with restore delays (lp:#1197958),"[Original report](https://bugs.launchpad.net/bugs/1197958) created by **Julien Fastré (julien-w)** ``` Hi, A lot of service providers are offering long-term storage services. This kind of storage is very cheap, but restoring a file forces a delay which may last a couple of hours. I am thinking about Amazon Glacier, but also OVH ""Personal Cloud Archive"" (http://www.ovh.com/fr/cloud/archives/) My clients are very interested in using this kind of backup. I would like to use a script to make incremental and encrypted backups, and I am thinking about adapting Duplicity for this.
I searched within the bug reports, but apart from bug 1039511 I did not see any discussion on implementing a backend adapted to this kind of storage. I have never contributed to Duplicity, but I had a glance at the code and it seems quite easy to understand. I see these problems:

1. Duplicity needs access to the ""manifest"" files (for restoring from another machine, or for checking whether the cache needs a sync before upload). I was thinking about separating those manifest files from the data files, adding an option --manifest-site (or something like that). For instance: duplicity --manifest-site=s3+http://my_bucket/ /home/me sftp://uid@other.host/some_dir If the ""--manifest-site"" option is detected, the manifest files would be sent/retrieved to/from this place (s3+http://my_bucket/ in my example) instead of the usual place (sftp://uid@other.host/some_dir in my example). But I could not see in the code where this should be changed.

2. For restoring files from such a ""glacier"", we should take the restore delay into account. I was thinking about adapting both the script and the backend class. In the backend class, I was thinking about adding a function ""hasDelay"". The return value would be a boolean, false by default. If the response is false, the script would continue as usual. If the response is true, then duplicity would execute the function ""prepareFileForGet"" with the filename as an argument. After asking the backend class to prepare the file, a loop would run every 2 minutes to ask the backend whether the requested files are ready for download (backend.isReadyForDownload(filename)). The result would be a boolean. If a file is ready, it would be downloaded, and the operation repeated until every file is ready. Do you think this feature would be useful? Do you expect other problems or collisions? I would be happy with comments and improvements... 
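The polling flow described in point 2 can be sketched in a few lines of Python. This is a hypothetical illustration of the proposal only: the backend methods used here (has_delay, prepare_file_for_get, is_ready_for_download, get) mirror the hasDelay/prepareFileForGet/isReadyForDownload names suggested above and are not an existing duplicity API.

```python
import time

# Hypothetical sketch of the proposed restore-delay flow; none of the
# backend methods below exist in duplicity today.
def fetch_with_delay(backend, filenames, poll_interval=120):
    if not backend.has_delay():
        # Ordinary backends with no restore delay: download immediately.
        return [backend.get(f) for f in filenames]
    for f in filenames:
        backend.prepare_file_for_get(f)  # e.g. start a Glacier restore job
    pending = set(filenames)
    results = []
    while pending:
        for f in list(pending):
            if backend.is_ready_for_download(f):
                results.append(backend.get(f))
                pending.discard(f)
        if pending:
            time.sleep(poll_interval)  # the proposal suggests ~2 minutes
    return results
```

With poll_interval left at its default this loop would wake every two minutes, as the report proposes, until every requested volume has been retrieved.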
Regards, Julien Fastré ``` Original tags: amazon glacier",12 118020380,2013-06-21 16:08:42.603,New Feature Request: list-current-files just top directories (lp:#1193410),"[Original report](https://bugs.launchpad.net/bugs/1193410) created by **Bradley (jbradley-whited)** ``` Currently, list-current-files lists everything. It'd be nice to have an option to list just the top directories for quick inspection. Thanks ``` Original tags: wishlist",6 118020366,2013-05-12 23:36:55.817,python-cloudfiles deprecated (lp:#1179322),"[Original report](https://bugs.launchpad.net/bugs/1179322) created by **Jonathan Krauss (jkrauss)** ``` Rackspace has deprecated python-cloudfiles in favor of their pyrax library, which consolidates all Rackspace Cloud API functionality into a single library. Attached is a simple backend for pyrax, ported over from cloudfiles. I tested it with Duplicity 0.6.21 on both Arch Linux and FreeBSD 8.3.0. ```",12 118020359,2013-05-10 10:44:50.646,Implement command line tab completion (lp:#1178619),"[Original report](https://bugs.launchpad.net/bugs/1178619) created by **Karl Maier (w-wall2001)** ``` It would be a great usability improvement if pressing the TAB key revealed possible actions and options on the command line while in the middle of typing a command. ```",6 118020353,2013-04-24 08:10:28.632,Duplicity deletes contents of cache on S3 network error (lp:#1172170),"[Original report](https://bugs.launchpad.net/bugs/1172170) created by **Tristan Seligmann (mithrandi)** ``` Duplicity version: 0.6.18 Python version: 2.6.6 Distribution: Debian 6.0.7 (""squeeze"") I've had this happen a few times, I think, but only realised what the issue was on the most recent attempt. If my scheduled Duplicity backup runs during a network outage, it looks like it fails to resolve the bucket DNS entry, concludes the bucket does not exist, and tries to create it. Unfortunately, before trying this, it deletes every file in the cache because they are not present on the backend. 
My cron output looks something like this (with my details censored out): Deleting local /home/[CENSORED]/.cache/duplicity/[CENSORED]/duplicity-full-signatures.20110827T170252Z.sigtar.gz (not authoritative at backend). [... repeated for every file in the cache ...] Last full backup date: none Last full backup is too old, forcing full backup Failed to create bucket (attempt #1) '[CENSORED]' failed (reason: gaierror: [Errno -2] Name or service not known) Unfortunately I have an S3 lifecycle rule set up to transition objects to Glacier that are older than a certain age, so recovering from this requires restoring all the manifests / signatures / etc. so that Duplicity can cache them again, which is a somewhat tedious and time-consuming manual process. I don't have -v9 output since this was an unattended automated job run from cron. ```",6 118020324,2013-04-08 06:48:06.225,duplicity crashed with SIGSEGV in g_file_copy() (lp:#1166019),"[Original report](https://bugs.launchpad.net/bugs/1166019) created by **Kenneth Loafman (kenneth-loafman)** ``` . 
ProblemType: Crash
DistroRelease: Ubuntu 13.04
Package: duplicity 0.6.21-0ubuntu1
ProcVersionSignature: Ubuntu 3.8.0-16.26-generic 3.8.5
Uname: Linux 3.8.0-16-generic x86_64
ApportVersion: 2.9.2-0ubuntu5
Architecture: amd64
Date: Sat Apr 6 03:00:37 2013
ExecutablePath: /usr/bin/duplicity
InterpreterPath: /usr/bin/python2.7
MarkForUpload: True
ProcCmdline: /usr/bin/python /usr/bin/duplicity --include=/home/username/.cache/deja-dup/metadata --exclude=/home/username/photos --exclude=/home/username/Downloads --exclude=/home/username/.local/share/Trash --exclude=/home/username/.cache/deja-dup/tmp --exclude=/home/username/.xsession-errors --exclude=/home/username/.thumbnails --exclude=/home/username/.Private --exclude=/home/username/.gvfs --exclude=/home/username/.adobe/Flash_Player/AssetCache --exclude=/home/username/.cache/deja-dup --exclude=/home/username/.cache --include=/home/username --exclude=/sys --exclude=/run --exclude=/proc --exclude=/var/tmp --exclude=/tmp --exclude=** --gio --volsize=25 / ftp://anonymous@lacie-2big/Public/backup --no-encryption --verbosity=9 --gpg-options=--no-use-agent --archive-dir=/home/username/.cache/deja-dup --tempdir=/home/username/.cache/deja-dup/tmp --log-fd=21
ProcEnviron: PATH=(custom, user) XDG_RUNTIME_DIR= LANG=en_US.UTF-8 SHELL=/bin/bash
SegvAnalysis: Segfault happened at: 0x7f60273fc175 : mov (%rax),%eax PC (0x7f60273fc175) ok source ""(%rax)"" (0x00000000) not located in a known VMA region (needed readable region)! destination ""%eax"" ok
SegvReason: reading NULL VMA
Signal: 11
SourcePackage: duplicity
StacktraceTop: g_file_copy () from /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0 ffi_call_unix64 () from /usr/lib/x86_64-linux-gnu/libffi.so.6 ffi_call () from /usr/lib/x86_64-linux-gnu/libffi.so.6 g_callable_info_invoke () from /usr/lib/libgirepository-1.0.so.1 g_function_info_invoke () from /usr/lib/libgirepository-1.0.so.1
Title: duplicity crashed with SIGSEGV in g_file_copy()
UpgradeStatus: Upgraded to raring on 2012-10-19 (168 days ago)
UserGroups: adm admin cdrom dialout libvirtd lpadmin mythtv plugdev sambashare ``` Original tags: amd64 apport-crash apport-failed-retrace raring",20 118020322,2013-04-06 01:10:24.965,AttributeError: 'list' object has no attribute 'startswith' when using verify (lp:#1165263),"[Original report](https://bugs.launchpad.net/bugs/1165263) created by **iceflatline (iceflatline)** ``` I am encountering the following error when running the following verify command on FreeBSD 9.1-RELEASE: /usr/local/bin/duplicity verify --include /mnt/files/backup/ --exclude '**' scp://backup@192.168.20.6//mnt/files/weekly/ /mnt/files/ Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1411, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1404, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1257, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/lib/python2.7/site-packages/duplicity/commandline.py"", line 1013, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/local/lib/python2.7/site-packages/duplicity/commandline.py"", line 906, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/local/lib/python2.7/site-packages/duplicity/backend.py"", line 161, in get_backend return _backends[pu.scheme](pu) File ""/usr/local/lib/python2.7/site-packages/duplicity/backends/_ssh_paramiko.py"", line 159, in __init__ self.config['identityfile']) 
File ""/usr/local/lib/python2.7/posixpath.py"", line 252, in expanduser if not path.startswith('~'): AttributeError: 'list' object has no attribute 'startswith' ```",8 118020317,2013-04-05 20:41:52.026,Automatically make a log file and store it next to the backup files (lp:#1165192),"[Original report](https://bugs.launchpad.net/bugs/1165192) created by **Otto Kekäläinen (otto)** ``` I'd like to be able to view log information at the backup target, not the source. The existing --log-file and --log-fd options are both related to logging at the source, i.e. the computer where the backup is made. I'd also like to have logs at the target, i.e. logs saved on the second computer (or folder) where the backups were sent. Compare to rdiff-backup: in the target directory, there is a subdirectory ""rdiff-backup"" that contains a few log files. Feature request: please automatically write a small log in the backup target directory. Looking at the log, an administrator should easily be able to see when the last run was made and whether there were any errors, and in case some runs had errors, the log should enable the admin to easily check the time of the last fully successful run. 
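The requested target-side log can be sketched in a few lines. This is an illustration of the feature request only, not an existing duplicity option; the helper and the file name "duplicity-runs.log" are assumptions.

```python
import datetime
import os

# Hedged sketch of the requested behaviour: after each run, append a
# one-line status record next to the backup files, so an administrator
# at the target can see when the last fully successful run happened.
# The log file name "duplicity-runs.log" is made up, not a duplicity
# convention.
def write_target_log(target_dir, ok, message=""):
    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    line = "%s %s %s\n" % (stamp, "OK" if ok else "ERROR", message)
    with open(os.path.join(target_dir, "duplicity-runs.log"), "a") as fh:
        fh.write(line)
```

Appending one line per run keeps the file tiny while still letting an admin at the target spot the most recent successful run at a glance, much like rdiff-backup's per-target logs mentioned above.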
```",8 118022936,2013-03-22 21:02:33.171,fails to backup (lp:#1158984),"[Original report](https://bugs.launchpad.net/bugs/1158984) created by **Timski (stuben-rein)** ``` deja-dup backs up some files and then exits with the following error message Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1411, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1404, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1379, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 509, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 386, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 327, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 320, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/home/tim/.cache/deja-dup/tmp/duplicity-eoJhZh-tempdir/mktemp-RpMdBq-24' I'm using Ubuntu Raring Ringtail (development branch) deja-dup 25.5-0ubuntu1 duplicity 0.6.21-0ubuntu1 ----------------------------------
/tmp/deja-dup.gsettings
org.gnome.DejaDup backend 'file'
org.gnome.DejaDup delete-after 0
org.gnome.DejaDup exclude-list ['/home/tim/.local/share/Trash']
org.gnome.DejaDup full-backup-period 90
org.gnome.DejaDup include-list ['/home/tim']
org.gnome.DejaDup last-backup '2013-03-22T12:09:33.361212Z'
org.gnome.DejaDup last-restore '2012-12-22T02:01:09.090993Z'
org.gnome.DejaDup last-run '2013-03-22T12:09:33.361212Z'
org.gnome.DejaDup nag-check '2013-01-24T05:03:32.879047Z'
org.gnome.DejaDup periodic true
org.gnome.DejaDup periodic-period 7
org.gnome.DejaDup prompt-check '2013-01-23T23:23:12.723530Z'
org.gnome.DejaDup root-prompt true
org.gnome.DejaDup welcomed true
org.gnome.DejaDup.File icon '. 
GThemedIcon drive-harddisk-usb drive-harddisk drive'
org.gnome.DejaDup.File name 'WDC WD75 00BPKT-00PK4T0: Beate'
org.gnome.DejaDup.File path '/home/tim/deja-dup'
org.gnome.DejaDup.File relpath b'brussel'
org.gnome.DejaDup.File short-name 'Beate'
org.gnome.DejaDup.File type 'volume'
org.gnome.DejaDup.File uuid '1425bdc1-3529-443d-a72a-fa67704d02d7'
org.gnome.DejaDup.Rackspace container 'brussel'
org.gnome.DejaDup.Rackspace username ''
org.gnome.DejaDup.S3 bucket ''
org.gnome.DejaDup.S3 folder 'brussel'
org.gnome.DejaDup.S3 id ''
org.gnome.DejaDup.U1 folder '/deja-dup/brussel'
----------------------------------
can't find a /tmp/deja-dup.log ```",10 118020314,2013-03-07 12:49:11.937,RFE: Support v2.0 of CLOUDFILES_AUTHURL (lp:#1152145),"[Original report](https://bugs.launchpad.net/bugs/1152145) created by **nodata (ubuntu-nodata)** ``` duplicity only supports version 1.0 of the cloud files authentication scheme, e.g. CLOUDFILES_AUTHURL=https://auth.api.rackspacecloud.com/v1.0 This is deprecated, as can be seen here: Would it be possible to get version 2.0 support? ```",6 118020311,2013-02-20 18:14:10.579,Online documentation and man page needs updating (lp:#1130819),"[Original report](https://bugs.launchpad.net/bugs/1130819) created by **duebbert (kai-i)** ``` I was tearing my hair out because S3 backups didn't work in the latest 0.6.21 version, only to find by accident that there are options which are not documented online or in the man page, e.g. --s3-use-multiprocessing (without this switch S3 backup doesn't work, but I'll file a separate bug for that.) Please update the documentation to show all options that are available. ```",6 118020302,2013-02-20 12:47:02.941,cloudfiles backend slow (lp:#1130649),"[Original report](https://bugs.launchpad.net/bugs/1130649) created by **Soren Hansen (soren)** ``` I noticed that the cloudfiles backend was incredibly slow. 
After poking around for a bit, I've realised that the culprit is the call to socket.getdefaulttimeout() in backend.py. I created a simple test script to upload a 100 MB file to Cloud Files. It does pretty much exactly what the cloudfiles backend does:

import os
import socket
from cloudfiles import Connection
from cloudfiles.errors import ResponseError
from cloudfiles import consts

conn_kwargs = {}
conn_kwargs['username'] = os.environ['CLOUDFILES_USERNAME']
conn_kwargs['api_key'] = os.environ['CLOUDFILES_APIKEY']
conn_kwargs['authurl'] = consts.default_authurl
conn = Connection(**conn_kwargs)
container = conn.create_container('speedtest')
sobject = container.create_object('100Mtest2rnd')
sobject.load_from_filename('100Mtest.rnd')

If I run it like that, it takes around 15 seconds to upload 100 MB. If I add a call to socket.setdefaulttimeout(30) before the Connection call, it takes 11 *minutes*. If I strace the two runs, I see a call to poll() before each write(). This gets added by Python's socketmodule.c due to the default timeout. I tried adding timestamps to the strace log and counted how many system calls each of the two runs makes over the course of a single second while transferring the data. With the default socket timeout set, I got just over 100 system calls (poll, write, read, poll, write, read, etc.). It's dealing with a block size of 4 kB, so that's 4 kB * (100/3) = 133 kB/s. Without the default socket timeout, I got around 1120 system calls (read, write, read, write, etc.). That translates to 4 kB * (1120/2) = 2240 kB/s. That's a pretty hefty difference. What confuses me, though, is that cloudfiles.Connection.__init__() also calls socket.setdefaulttimeout (passing it a value of 5 by default). 
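One way around the observed slowdown is to make sure no process-wide default timeout is in force while the bulk transfer runs. A small context manager along these lines (our own sketch, not part of duplicity or python-cloudfiles) saves, clears, and restores the default:

```python
import socket

# Sketch: temporarily clear the process-wide default socket timeout so
# that sockets created during a bulk upload are plain blocking sockets
# (no poll() before every write), then restore the previous setting.
class no_default_socket_timeout(object):
    def __enter__(self):
        self._saved = socket.getdefaulttimeout()
        socket.setdefaulttimeout(None)
        return self

    def __exit__(self, *exc_info):
        socket.setdefaulttimeout(self._saved)
        return False
```

Wrapped around the load_from_filename() call in the test script above, uploads would go through blocking sockets regardless of what the rest of the program had set.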
```",6 118020299,2013-02-18 15:23:22.885,"SSH connection timeouts, duplicity does not respect --timeout option (lp:#1129188)","[Original report](https://bugs.launchpad.net/bugs/1129188) created by **Olivier - interfaSys (olivier-interfasys)** ``` Our backup server seems to now take a long time to go through the authentication process or something is wrong with duplicity. In order to determine if the long process is the problem, I've tried to raise the timeout limit, but duplicity doesn't seem to respect that. The backend used is scp (scp://hostname), but sftp doesn't work either. The options I use include: --timeout=160 --ssh-options='-oConnectTimeout=160' I even edited globals.py to see if it would change something. It didn't. The error message, using the wrapper duply, is: --- Start running command STATUS at 16:16:04.000 --- BackendException: ssh connection to hostname:22 failed: [Errno 64] Host is down 16:16:19.000 Task 'STATUS' failed with exit code '23'. --- Finished state FAILED 'code 23' at 16:16:19.000 - Runtime 00:00:15.000 --- It says the runtime is 15 seconds, which is way shorter than the timeout. Connecting to the server directly via scp or sftp works. ```",6 118020292,2013-02-14 14:14:42.920,Duplicity incremental backup archives much bigger than necessary (lp:#1125225),"[Original report](https://bugs.launchpad.net/bugs/1125225) created by **gcc (chris+ubuntu-qwirx)** ``` Duplicity on our mail server eats 3 GB of our 6 GB /var partition for its ""cache"". Every day, the incremental backup creates a new ~200 MB signatures file, such as duplicity-new-signatures.20130211T040207Z.to.20130212T040221Z.sigtar.gz. This file contains (large) signatures of not just the files that have changed, but a random selection of files which have NOT changed. 
For example, of our current set, the following files contain ~15 kB signatures of a file that has not changed since 2004:

chris@one-mail(foo)$ for i in /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.*.sigtar.gz; do sudo tar tzvf $i signature/home/sytse/horde-2.2.5.tar.gz && echo ""yes: $i"" || echo ""no: $i""; done
[sudo] password for chris:
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130120T040216Z.to.20130121T040206Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130121T040206Z.to.20130122T040206Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130122T040206Z.to.20130123T040214Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130123T040214Z.to.20130124T040206Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130124T040206Z.to.20130125T040211Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130125T040211Z.to.20130126T040205Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130126T040205Z.to.20130127T040212Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130127T040212Z.to.20130128T040210Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130128T040210Z.to.20130129T040212Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130129T040212Z.to.20130130T040208Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130130T040208Z.to.20130131T040221Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130131T040221Z.to.20130201T040208Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130201T040208Z.to.20130202T040209Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130202T040209Z.to.20130203T040210Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130203T040210Z.to.20130204T040205Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130204T040205Z.to.20130205T040209Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130205T040209Z.to.20130206T040207Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130206T040207Z.to.20130207T040210Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130207T040210Z.to.20130208T040210Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130208T040210Z.to.20130209T040219Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130209T040219Z.to.20130210T040211Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130210T040211Z.to.20130211T040207Z.sigtar.gz
no: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130211T040207Z.to.20130212T040221Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130212T040221Z.to.20130213T040226Z.sigtar.gz
yes: /var/cache/duplicity/fb4efb5e7e414ab421bd5fa2f90954de/duplicity-new-signatures.20130213T040226Z.to.20130214T040220Z.sigtar.gz

Can anyone think why Duplicity might be storing this redundant information? As the incremental backups are fairly useless without the full backup on which they depend, I don't think redundant storage of unchanged signatures is necessary or helpful. Cheers, Chris. 
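The shell loop above can also be automated for inspection. A small sketch (the function name and the "any occurrence of an unchanged file is redundant" interpretation are ours, not duplicity's) that reports which cached incremental signature archives contain an entry for a given path:

```python
import glob
import tarfile

# Sketch: list the duplicity-new-signatures.*.sigtar.gz archives in a
# duplicity cache directory that contain an entry for the given member
# path. For a file that never changed, each such occurrence in an
# incremental signature archive is redundant per the argument above.
def sigtars_containing(cache_dir, member):
    hits = []
    pattern = cache_dir + "/duplicity-new-signatures.*.sigtar.gz"
    for name in sorted(glob.glob(pattern)):
        with tarfile.open(name, "r:gz") as tar:
            if any(m.name == member for m in tar.getmembers()):
                hits.append(name)
    return hits
```

Run against the cache directory from the report, this would reproduce the yes/no listing above as a Python list of the "yes" archives.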
```",6 118020289,2013-02-11 13:57:50.808,error message: Backup location ‘/tmp/duplicity-gbDXuc-tempdir/mktemp-LJFF1G-430’ does not exist. (lp:#1122066),"[Original report](https://bugs.launchpad.net/bugs/1122066) created by **John Kirby (john-kirbynet)** ``` Backup location ‘/tmp/duplicity-gbDXuc-tempdir/mktemp-LJFF1G-430’ does not exist. Message received after several hours of running. This is the first time I have tried to use the backup supplied as part of the Ubuntu 12.04 distribution. I created a directory on a stand-alone HD, configured the storage and folder tabs, and kicked off the backup for the first time. ```",6 118020282,2013-02-01 12:34:59.881,Duplicity performance issue with /etc/localtime (lp:#1112450),"[Original report](https://bugs.launchpad.net/bugs/1112450) created by **Tomáš Varga (tomas-varga)** ``` Every 10 minutes my own daemon on my server executes duplicity to back up about 100 MB of data. When doing a full backup it's OK, but sometimes an incremental backup takes at least 20 minutes and consumes 100% of one CPU core the whole time. Some info about the data: It's a map from the game Minecraft. (Bug caused by data format?) The game server is running while it's doing the backup. I cannot stop it. (Cause - file being sometimes overwritten during backup?) Info about the backup directory: It's 2121464904 B large (2023.186 MB) and contains 3724 files. (Too many files in one directory?) Duplicity version (duplicity --version): duplicity 0.6.08b Python version (python --version): Python 2.6.6 OS (cat /etc/issue): Debian GNU/Linux 6.0 \n \l root@delorean:~/duplicity# nohup strace duplicity -v9 --no-encryption /home/gate/world file:///opt/backup >dup 2>str & [1] 3678 Every second there were about 4500 lines written into strace's output file. 
root@delorean:~/duplicity# du -sh *
1,8M dup
1,1G str
root@delorean:~/duplicity# wc -l *
     15442 dup
  16614942 str
  16630384 total
root@delorean:~/duplicity# grep -vEc '^stat\(""/etc/localtime""' str
35628

The duplicity output file is attached, and so is the strace output file without the stat(""/etc/localtime"", ...) syscalls (in a .tar file). So, is this a bug, or am I supposed to rotate backups (move old ones to another folder) every few days? [edit] I'm not sure if the attachment got uploaded because I can't find it anywhere, so I'm pasting a link: http://213.129.147.24/dupstr.tar [/edit] ``` Original tags: bug cpu duplicity issue kernel localtime performance stat syscall",6 118020280,2013-01-28 22:12:35.655,Server connection dropped (lp:#1108284),"[Original report](https://bugs.launchpad.net/bugs/1108284) created by **Pivert (g-launchpad-pivert-org)** ``` Some transfers are made during every try, but I'm permanently getting this kind of error: ..... Comparing ('herbert3.img',) and None Getting delta of (('herbert3.img',) /mnt/bkp_fde/current/vmstorage/vm-storage-pool/herbert3.img reg) and None A herbert3.img AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-NoVY5L-tempdir/mktemp-1pbtjP-3 AsyncScheduler: running task synchronously (asynchronicity disabled) Deleting /tmp/duplicity-NoVY5L-tempdir/mktemp-1pbtjP-3 Forgetting temporary file /tmp/duplicity-NoVY5L-tempdir/mktemp-1pbtjP-3 AsyncScheduler: task completed successfully Processed volume 859 No handlers could be found for logger ""paramiko.transport"" Removing still remembered temporary file /tmp/duplicity-NoVY5L-tempdir/mkstemp-z5fIGD-1 Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1411, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1404, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1374, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 510, in full_backup sig_outfp.to_remote() File 
""/usr/lib/python2.7/dist-packages/duplicity/dup_temp.py"", line 184, in to_remote globals.backend.move(tgt) #@UndefinedVariable File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 363, in move self.put(source_path, remote_filename) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/sshbackend.py"", line 191, in put raise BackendException(""sftp put of %s (as %s) failed: %s"" % (source_path.name,remote_filename,e)) BackendException: sftp put of /root/.cache/duplicity/86878d6befd060f5e96646260ca047da/duplicity-full- signatures.20130120T21255 duplicity-full-signatures.20130120T212556Z.sigtar.gpg) failed: Server connection dropped: BackendException: sftp put of /root/.cache/duplicity/86878d6befd060f5e96646260ca047da/duplicity-full- signatures.20130120T21255 duplicity-full-signatures.20130120T212556Z.sigtar.gpg) failed: Server connection dropped: Backup Ended ```",8 118020277,2013-01-09 17:14:22.680,Can't restore old backup that hasn't been updated for a long time (lp:#1097849),"[Original report](https://bugs.launchpad.net/bugs/1097849) created by **Carlo Fragni (carlofragni)** ``` I am trying to restore a backup from an old server that has been deactivated. The last time backup was executed was over six months ago. The backup files do exist but duplicity doesn't seem to find any backup chains because the last backup signature file dates from a long time ago. Using ubuntu 12.04 amd 64, python 2.7.3, duplicity 0.6.18 . 
Here is the -v9 output: $ duplicity -v9 --no-encryption --allow-source-mismatch s3+http:/// ~/restore/ Using archive dir: /home//.cache/duplicity/8d63fa05952285d6243341c30b9ad3b6 Using backup name: 8d63fa05952285d6243341c30b9ad3b6 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Main action: restore ================================================================================ duplicity 0.6.18 (February 29, 2012) Args: /usr/bin/duplicity -v9 --no-encryption --allow-source-mismatch s3+http:/// /home//restore/ Linux 3.2.0-35-generic #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012 x86_64 x86_64 /usr/bin/python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] ================================================================================ Using temporary directory /tmp/duplicity-YSboMf-tempdir Registering (mkstemp) temporary file /tmp/duplicity-YSboMf-tempdir/mkstemp- AVob8O-1 Temp has 480694202368 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: /home//.cache/duplicity/8d63fa05952285d6243341c30b9ad3b6 Found 0 secondary backup chains. 
No backup chains with active signatures found No orphaned or incomplete backup sets found. Removing still remembered temporary file /tmp/duplicity-YSboMf-tempdir/mkstemp-AVob8O-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1403, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1396, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1330, in main restore(col_stats) File ""/usr/bin/duplicity"", line 623, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/bin/duplicity"", line 645, in restore_get_patched_rop_iter backup_chain = col_stats.get_backup_chain_at_time(time) File ""/usr/local/lib/python2.7/dist-packages/duplicity/collections.py"", line 952, in get_backup_chain_at_time raise CollectionsError(""No backup chains found"") CollectionsError: No backup chains found ```",6 118020267,2013-01-06 01:47:18.094,Impossible to backup several directories without reading whole disk (lp:#1096492),"[Original report](https://bugs.launchpad.net/bugs/1096492) created by **vsespb (vi1tsr)** ``` Hello. I need to back up /etc and /home/vse/.rvm into a single location. I assume the correct way to do this is: duplicity --no-encryption --include /etc --include /home/vse/.rvm/ --exclude ""**"" / file:///home/duplicity Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. Error accessing possibly locked file /home/vse/.gvfs ""Error accessing possibly locked file /home/vse/.gvfs"" - I assume that means it tries to read the whole disk (which is a bug, because it's slow and the disk can contain other mounted filesystems)? Why does it try to access /home/vse/.gvfs if I need just /etc and /home/vse/.rvm? 
(duplicity 0.6.08b Ubuntu 10.04 Python, EXT4) ```",10 118020226,2012-12-04 13:07:19.208,Backup cannot be restored (no signature chains found) (lp:#1086374),"[Original report](https://bugs.launchpad.net/bugs/1086374) created by **Guy Van Sanden (gvs)** ``` I get: [root@ ]# duplicity list-current-files scp://192.168.11.2/backup/caw-server1 Local and Remote metadata are synchronized, no sync needed. Last full backup date: none Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1391, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1384, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1322, in main list_current(col_stats) File ""/usr/bin/duplicity"", line 598, in list_current sig_chain = col_stats.get_signature_chain_at_time(time) File ""/usr/lib64/python2.6/site-packages/duplicity/collections.py"", line 977, in get_signature_chain_at_time raise CollectionsError(""No signature chains found"") CollectionsError: No signature chains found when trying to list files from my backup. But the files seem OK outside of duplicity. This affects multiple servers backing up to different locations. 
OS is CentOS 6.2 x86 Duplicity 0.6.18 from the repos --- Using archive dir: /root/.cache/duplicity/1dc0bb65e1eb97582ca301487786e1b1 Using backup name: 1dc0bb65e1eb97582ca301487786e1b1 Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: list-current ================================================================================ duplicity 0.6.18 (February 29, 2012) Args: /usr/bin/duplicity -v9 list-current-files --no-encryption scp://192.168.11.1/backup/caw-server2 Linux caw-server2.cawdekempen.be 2.6.32-220.23.1.el6.x86_64 #1 SMP Mon Jun 18 18:58:52 BST 2012 x86_64 x86_64 /usr/bin/python 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] ================================================================================ Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: SftpBackend Archive dir: /root/.cache/duplicity/1dc0bb65e1eb97582ca301487786e1b1 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. 
Using temporary directory /tmp/duplicity-ckTuyx-tempdir Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1391, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1384, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1322, in main list_current(col_stats) File ""/usr/bin/duplicity"", line 598, in list_current sig_chain = col_stats.get_signature_chain_at_time(time) File ""/usr/lib64/python2.6/site-packages/duplicity/collections.py"", line 977, in get_signature_chain_at_time raise CollectionsError(""No signature chains found"") CollectionsError: No signature chains found ```",50 118022541,2012-12-03 17:55:48.548,Déjà Dup guesses the wrong hostname (lp:#1086068),"[Original report](https://bugs.launchpad.net/bugs/1086068) created by **Jeroen Hoek (mail-jeroenhoek)** ``` deja-dup 22.0-0ubuntu2 duplicity 0.6.18-0ubuntu3 Ubuntu 12.04.1 LTS Ubuntu currently creates /etc/hosts with these two lines, with ""computername"" being the name of my computer: 127.0.0.1 localhost 127.0.1.1 computername This works fine most of the time, but can cause problems with some software. As a work-around I have changed /etc/hosts to look like this: 127.0.0.1 localhost computername In both cases /etc/hostname and `hostname` say ""computername"", and computername is pingable. Now, this is perfectly valid, and should not have an effect on Déjà Dup. Unfortunately when I configure my /etc/hosts like this, Déjà Dup now thinks my computer is called ""localhost"" instead of ""computername"", and warns me when I try to back up to a location containing previous backups filed under ""computername"". Why is Déjà Dup guessing my computer is called ""localhost""? Shouldn't `hostname` or /etc/hostname be authoritative in this case? 
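The mismatch the reporter describes can be illustrated with the standard library (a hedged sketch: the exact lookup Déjà Dup performs is not shown in the report). `socket.gethostname()` returns the kernel hostname, which tracks /etc/hostname, while `socket.getfqdn()` resolves that name through /etc/hosts and can come back as ""localhost"" when the hostname is merged onto the 127.0.0.1 line:

```python
import socket

# gethostname() reads the kernel hostname (what /etc/hostname sets);
# getfqdn() pushes that name through the resolver, i.e. /etc/hosts first,
# so its answer depends on which line of /etc/hosts carries the name.
kernel_name = socket.gethostname()
resolved_name = socket.getfqdn()
print(kernel_name, resolved_name)
```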
```",6 118020207,2012-12-02 21:02:13.562,trouble with webdavs with data more than 20Mb (lp:#1085720),"[Original report](https://bugs.launchpad.net/bugs/1085720) created by **Rolf Glei (rogle-le-deactivatedaccount)** ``` Hello, I've used duplicity for backup. Now my provider changed and I've started a new backup. The errors shown below occur once the data exceeds roughly 10-20 MB: Uploading data with duplicity: duplicity 0.6.20 (October 28, 2012) Args: /usr/local/bin/duplicity --volsize 5 --num-retries 10 /media/test/xxx webdavs://xxx@webdav.xxx.de/xxx /usr/bin/python 2.6.6 (r266:84292, Dec 27 2010, 00:02:40) [GCC 4.4.5] gives the errors shown below (1-4) once the data exceeds about 20 MB 1) ---------- Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. WebDAV backend giving up after 10 attempts to PUT /xxx/duplicity- full...Z.vol8.difftar.gpg BackendException: (200, 'OK') 2) ---------- Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. WebDAV backend giving up after 10 attempts to PUT /xxx/duplicity- full...Z.vol30.difftar.gpg BackendException: (200, 'OK') 3) ---------- Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. 
Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1403, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1396, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1371, in main full_backup(col_stats) File ""/usr/local/bin/duplicity"", line 501, in full_backup globals.backend) File ""/usr/local/bin/duplicity"", line 399, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/local/lib/python2.6/dist- packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/local/lib/python2.6/dist- packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/local/bin/duplicity"", line 398, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num), File ""/usr/local/bin/duplicity"", line 296, in put backend.put(tdp, dest_filename) File ""/usr/local/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 257, in put response = self.request(""PUT"", url, source_file.read()) File ""/usr/local/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 110, in request response = self.conn.getresponse() File ""/usr/lib/python2.6/httplib.py"", line 990, in getresponse response.begin() File ""/usr/lib/python2.6/httplib.py"", line 391, in begin version, status, reason = self._read_status() File ""/usr/lib/python2.6/httplib.py"", line 349, in _read_status line = self.fp.readline() File ""/usr/lib/python2.6/socket.py"", line 427, in readline data = recv(1) File ""/usr/lib/python2.6/ssl.py"", line 215, in recv return self.read(buflen) File ""/usr/lib/python2.6/ssl.py"", line 136, in read return self._sslobj.read(len) SSLError: The read operation timed out 4) ------------- Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. 
WebDAV backend giving up after 10 attempts to PUT /xxx/duplicity- full...Z.vol3.difftar.gpg BackendException: (412, 'Precondition Failed') ********************************************* Checking saved data with verify often gives errors like those shown below (5-6) (all actions were done with the same PASSPHRASE value!) 5) ------------- Local and Remote metadata are synchronized, no sync needed. Last full backup date: Fri Nov 30 23:47:20 2012 GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: decrypt_message failed: eof ===== End GnuPG log ===== 6) Copying duplicity-full-signatures...Z.sigtar.gpg to local cache. Copying duplicity-full...Z.manifest.gpg to local cache. Last full backup date: Sat Dec 1 16:43:32 2012 Invalid data - SHA1 hash mismatch for file: duplicity-full...voll7.difftar.gpg Calculated hash: da39a3ee5e6b4b0d3255bfef95601890afd80709 Manifest hash: fbdfcc66b6900103c3b73580fee55324dac6c40d ********************************************** Once an error occurs, no data is available for download: all is lost... isn't it? Thanks a lot for any ideas... ```",6 118020201,2012-11-28 15:34:45.296,selecting testcase for otherFilesystem makes bad assumption (lp:#1084121),"[Original report](https://bugs.launchpad.net/bugs/1084121) created by **Thomas Eriksson (scrizt)** ``` duplicity-0.6.18 In /testing/tests/selectiontest.py, lines 249 - 272, there is an assumption that / and /usr/bin reside on the same filesystem. I'd say it's common that they do not, and suggest changing the test to use /bin instead of /usr/bin. 
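The assumption the test encodes can be checked directly: two paths live on the same filesystem exactly when `os.stat()` reports the same device number for both. A quick sketch (this is an illustration, not part of the reporter's patch):

```python
import os

# st_dev identifies the filesystem a path lives on; equal device numbers
# mean "/" and "/bin" share a filesystem, which is what the test assumes.
same_fs = os.stat("/").st_dev == os.stat("/bin").st_dev
print("/bin on same filesystem as /:", same_fs)
```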
Patch suggestion: --- duplicity-0.6.18.orig/testing/tests/selectiontest.py +++ duplicity-0.6.18/testing/tests/selectiontest.py @@ -252,12 +252,12 @@ select = Select(root) sf = select.other_filesystems_get_sf(0) assert sf(root) is None - if os.path.ismount(""/usr/bin""): + if os.path.ismount(""/bin""): sfval = 0 else: sfval = None - assert sf(Path(""/usr/bin"")) == sfval, \ - ""Assumption: /usr/bin is on the same filesystem as /"" + assert sf(Path(""/bin"")) == sfval, \ + ""Assumption: /bin is on the same filesystem as /"" if os.path.ismount(""/dev""): sfval = 0 else: ```",6 118020197,2012-11-16 01:15:08.826,webdav backend: fails on 302/Redirect responses (lp:#1079475),"[Original report](https://bugs.launchpad.net/bugs/1079475) created by **az (az-debian)** ``` this is a forward of debian bug #693370 which lives over there: http://bugs.debian.org/693370 the original reporter can't use webdav with box.net because duplicity interprets the 302 Found that PROPFIND returns as an indication of failure. the debian bug report is for version 0.6.08, but the problem is also present in the newer versions incl. 0.6.20 as there have been no relevant changes to the webdav backend. regards az ```",6 118020191,2012-11-15 13:22:04.045,"--ignore-errors not working, bad variable name in commandline.py (lp:#1079183)","[Original report](https://bugs.launchpad.net/bugs/1079183) created by **Mikko Ohtamaa (mikko-red-innovation)** ``` In commandline.py: parser.add_option(""--ignore-errors"", action=""callback"", dest=""ignore_errors"", callback=lambda o, s, v, p: (log.Warn( _(""Running in 'ignore errors' mode due to %s; please "" ""re-consider if this was not intended"") % s), setattr(p.values, ""ignore errors"", True))) It sets the attribute ""ignore errors"" when it should be ""ignore_errors"", so the --ignore-errors command-line switch has no effect. Trunk version. 
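The one-character nature of this bug is easy to demonstrate: `setattr` happily creates an attribute whose name contains a space, but code that later reads `values.ignore_errors` will never see it. A minimal sketch, using a plain object as a stand-in for optparse's values object:

```python
class Values:
    """Stand-in for the optparse values object the callback receives."""

v = Values()

# The buggy call from commandline.py: the attribute name has a space...
setattr(v, "ignore errors", True)
# ...so the underscore name that dest="ignore_errors" declares stays unset.
assert getattr(v, "ignore_errors", None) is None
assert vars(v)["ignore errors"] is True

# The fix is simply the underscore spelling:
setattr(v, "ignore_errors", True)
assert v.ignore_errors is True
```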
```",6 118020171,2012-11-11 14:38:27.047,GID wrong (0) for os.chown when copying or moving files (lp:#1077647),"[Original report](https://bugs.launchpad.net/bugs/1077647) created by **Ralf Herold (ralf-herold)** ``` Using duplicity 0.6.20, python 2.7.2, gpg 2.0.17 under Mac OS X 10.8, the following error is reproducible when trying to list remote backup (WebDAV) contents on a different computer from where the backup was made: Traceback (most recent call last):   File ""/opt/local/bin/duplicity"", line 1403, in     with_tempdir(main)   File ""/opt/local/bin/duplicity"", line 1396, in with_tempdir     fn()   File ""/opt/local/bin/duplicity"", line 1272, in main     sync_archive(decrypt)   File ""/opt/local/bin/duplicity"", line 1072, in sync_archive     copy_to_local(fn)   File ""/opt/local/bin/duplicity"", line 1021, in copy_to_local     tdp.move(globals.archive_dir.append(loc_name))   File ""/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/duplicity/path.py"", line 617, in move     self.copy(new_path)   File ""/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/duplicity/path.py"", line 443, in copy     self.copy_attribs(other)   File ""/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/duplicity/path.py"", line 448, in copy_attribs     util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid))   File ""/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/duplicity/util.py"", line 65, in maybe_ignore_errors     return fn()   File ""/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/duplicity/path.py"", line 448, in     util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) OSError: [Errno 1] Operation not permitted: '/Users//.cache/duplicity/duply_/duplicity-full- signatures.20120929T191652Z.sigtar.gz' In 
""copy"" calling ""copy_attribs"", the self.stat.st_uid is correctly set (e.g., 504) but the self.stat.st_gid is 0, which would be root, and this throws an OSError as duplicity is run as a normal user. I could not find where in duplicity/path.py the gid is determined; this may be an issue with the ROPath object? This is a temporary fix: In duplicity/path.py, line 448, change self.stat.st_gid to -1: - util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) + util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, -1)) (--ignore-errors does not fix the OSError.) I have seen an informal report for duply (http://niebegeg.net/post/32010111657/duplicity-backups-wiederherstellen) which mentions the above issue as well, but this may not have been submitted as a bug. Many thanks - ```",12 118020168,2012-11-06 22:47:26.705,Import error after update to 0.6.20 (lp:#1075766),"[Original report](https://bugs.launchpad.net/bugs/1075766) created by **iceflatline (iceflatline)** ``` I am receiving the following error message on all duplicity commands after an update to duplicity-0.6.20 ""Import of duplicity.backends.u1backend Failed: No module named httplib2"" I am using FreeBSD 9.0-Release; duplicity-0.6.19_2 updated to duplicity-0.6.20 using portmaster. 
```",8 118023052,2012-09-11 09:09:24.026,backup failed after moving backup location (lp:#1049002),"[Original report](https://bugs.launchpad.net/bugs/1049002) created by **Api (aa-pp-ii+launchpad)** ``` How to reproduce: - full backup to a remote SSH location A - then move the remote backup location to B and update the local cache files accordingly, so that dejadup continues with incremental backups wrt the previous full backup I followed this guide http://blog.linux2go.dk/2011/01/20/moving-duplicity- and-hence-deja-dup-backups/ - then run dejadup again: you expect an incremental backup to location B, instead you get the following error - as a side effect, DD deletes all local cache files, even those of other duplicity backups Ubuntu 12.04.1 LTS 64 bit deja-dup 22.0-0ubuntu2 duplicity 0.6.19-0ubuntu0ppa18~precise1 Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1391, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1384, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1264, in main     globals.archive_dir).set_values()   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 696, in set_values     self.get_backup_chains(partials + backend_filename_list)   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 819, in get_backup_chains     map(add_to_sets, filename_list)   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 809, in add_to_sets     if set.add_filename(filename):   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 97, in add_filename     (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20120625T080651Z.vol1.difftar.gpg', 2: 'duplicity-full.20120625T080651Z.vol2.difftar.gpg', 3: 'duplicity- ... ... ... 
full.20120625T080651Z.vol943.difftar.gpg', 944: 'duplicity- full.20120625T080651Z.vol944.difftar.gpg', 945: 'duplicity- full.20120625T080651Z.vol945.difftar.gpg', 946: 'duplicity- full.20120625T080651Z.vol946.difftar.gpg'}, 'duplicity- full.20120625T080651Z.vol606.difftar.gpg') ```",18 118020149,2012-08-10 15:14:47.784,Invalid Manifest after Restart (lp:#1035349),"[Original report](https://bugs.launchpad.net/bugs/1035349) created by **jocen Lwer (jocenlwer)** ``` Hello, Versions: duplicity version 0.6.19, python 2.7.3, Ubuntu 12.04 (duplicity is from Ubuntu 12.10). The same bug occurs with 0.6.18, which is in Ubuntu 12.04. All files are in the attached zip. I used the duply profile 'conf'. First I make a new backup (run.1.log) and kill it while it is copying data. The manifest 'duplicity-full.20120810T143459Z.manifest.part' looks like: ... Volume 2: StartingPath 060521-002.MPG 6486 EndingPath 060521-002.MPG 12961 Hash SHA1 bd9e4caa974186f3201f15c518dfee46056a6683 Second, I restart (run.2.log), which returns OK. But the manifest now looks like: Volume 2: StartingPath 060521-002.MPG 6486 EndingPath 060521-002.MPG 12961 Hash SHA1 bd9e4caa974186f3201f15c518dfee46056a6683 Volume 2: StartingPath 060521-002.MPG 6486 EndingPath 060521-002.MPG 12961 Hash SHA1 dd8cbafc81d6d53eb2cb5df7be37fd3c397ad2cc Third, when I make a delta backup (run.3.log) with nothing changed, it does not work correctly! Ciao, Jocen ```",10 118020147,2012-07-13 13:31:35.257,"Archive failed (TypeError: %d format: a number is required, not NoneType in restart_position_iterator) (lp:#1024388)","[Original report](https://bugs.launchpad.net/bugs/1024388) created by **Anthony O. 
(netangel+launchpad)** ``` When saving my files, I got this error: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1241, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1234, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1207, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 416, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 262, in write_multivol restart_position_iterator(tarblock_iter) File ""/usr/bin/duplicity"", line 192, in restart_position_iterator (""/"".join(last_index), last_block, ""/"".join(tarblock_iter.previous_index)), TypeError: %d format: a number is required, not NoneType I'm using deja-dup (14.0.3-0ubuntu1). Duplicity version : 0.6.08b-0ubuntu2 Python version : 2.6.5-0ubuntu1 OS Distro and version : Ubuntu 10.04.4 LTS (Linux myhost 2.6.32-41-generic #91-Ubuntu SMP Wed Jun 13 11:43:55 UTC 2012 x86_64 GNU/Linux) ``` Original tags: archive deja-dup",6 118020145,2012-06-30 22:32:15.014,WebDav does not work 0.6.19 (lp:#1019678),"[Original report](https://bugs.launchpad.net/bugs/1019678) created by **Dmitry (diia)** ``` Duplicity version 0.6.19 Python version 2.7 OS Distro and version: Ubuntu Server 12.04 Type of target filesystem: webdav The command doesn't upload anything to my WebDAV storage; I get this error: Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1236, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1229, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1130, in main sync_archive() File ""/usr/local/bin/duplicity"", line 910, in sync_archive remlist = globals.backend.list() File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/webdavbackend.py"", line 168, in list response = self.request(""PROPFIND"", self.directory, self.listbody) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/webdavbackend.py"", line 110, in request response = self.conn.getresponse() File 
""/usr/lib/python2.7/httplib.py"", line 1018, in getresponse raise ResponseNotReady() ResponseNotReady ```",12 118018933,2012-06-08 09:30:23.689,ftps/lftp backend ftp:ssl-protect-data should not be enforced (lp:#1010393),"[Original report](https://bugs.launchpad.net/bugs/1010393) created by **Eugene Crosser (crosser)** ``` Given that the files stored on the backend are encrypted anyway, there is no need to make the data transfer SSL-protected. On the other hand, using SSL slows down the transfer (there is an especially bad case when lftp is compiled against the gnutls library). I think that unconditionally including ""set ftp:ssl-protect-data true"" is ill-advised and should rather be removed. Users who really need it can include this directive in their .lftp/rc. ```",12 118020140,2012-06-04 06:17:39.732,OverflowError: join() is too long for a Python string (lp:#1008343),"[Original report](https://bugs.launchpad.net/bugs/1008343) created by **duplicity (duplicity)** ``` # duplicity --version duplicity 0.6.19 # duplicity incremental --no-encryption /storage file:///ba Local and Remote metadata are synchronized, no sync needed. Last full backup left a partial set, restarting. Last full backup date: Mon May 21 08:35:03 2012 RESTART: Volumes 62030 to 62030 failed to upload before termination. Restarting backup at volume 62030. Restarting after volume 62029, file data/data_backup/blogI.tar, block 7463456 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1391, in ? 
with_tempdir(main) File ""/usr/bin/duplicity"", line 1384, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1354, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 500, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 346, in write_multivol restart_position_iterator(tarblock_iter) File ""/usr/bin/duplicity"", line 222, in restart_position_iterator while tarblock_iter.next(): File ""/usr/lib64/python2.4/site-packages/duplicity/diffdir.py"", line 504, in next result = self.process_continued(size) File ""/usr/lib64/python2.4/site-packages/duplicity/diffdir.py"", line 676, in process_continued data, last_block = self.get_data_block(self.process_fp, size - 512) File ""/usr/lib64/python2.4/site-packages/duplicity/diffdir.py"", line 662, in get_data_block if fp.close(): File ""/usr/lib64/python2.4/site-packages/duplicity/diffdir.py"", line 431, in close self.callback(self.sig_gen.getsig(), *self.extra_args) File ""/usr/lib64/python2.4/site-packages/duplicity/librsync.py"", line 214, in getsig return ''.join(self.sigstring_list) OverflowError: join() is too long for a Python string ```",12 118018869,2012-05-28 10:39:14.343,huge memory usage on big files (lp:#1005478),"[Original report](https://bugs.launchpad.net/bugs/1005478) created by **nocturo (nocturo)** ``` Hello, I'm running the following: duplicity 0.6.19 Python 2.4.3 CentOS 5.8 x64 this is all on a local backend. Synopsis: When backing up LVM space with a huge 140 GB image file, duplicity uses a huge amount of memory. root 2781 0.4 80.7 2500696 1269952 ? 
D May26 13:29 /usr/bin/python /usr/bin/duplicity --archive-dir /backup/node/ --name vm2842 --no-encryption --verbosity 4 --full-if-older-than 4W --volsize 100 --allow-source-mismatch --exclude-globbing-filelist /etc/duply/vm2842/exclude /mnt/lvm/vm2842 file:///backup/node/vm2842 free -m output: total used free shared buffers cached Mem: 1536 1526 9 0 0 14 -/+ buffers/cache: 1511 24 Swap: 6143 2462 3680 I've attached pmap reference as well. It's stuck at this file for a while: -rw------- 1 x x 138G May 23 08:45 /mnt/lvm/vm2842/home/x/imgs/msm-org- production-sparse-small.img ```",10 118020124,2012-05-23 19:25:03.743,"Fails ""silently"" trying to restore an Amazon S3 backup without clock in sync (lp:#1003615)","[Original report](https://bugs.launchpad.net/bugs/1003615) created by **Rodrigo Campos (rodrigocc)** ``` Hi! I just found out that trying to restore a backup from Amazon S3 on a PC that doesn't have its clock in sync fails. The error is not very descriptive of the situation. The error is the following: (omitted the S3 bucket) $ duplicity restore --s3-use-new-style -v9 --num-retries 5 's3+http:///' retore Using archive dir: /home/rata/.cache/duplicity/ca450b9de1f3f3afff546aee88a75d8c Using backup name: ca450b9de1f3f3afff546aee88a75d8c Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: 
restore ================================================================================ duplicity 0.6.18 (February 29, 2012) Args: /usr/bin/duplicity restore --s3-use-new-style -v9 --num-retries 5 s3+http:/// retore Linux lindsay 3.2.0-2-amd64 #1 SMP Sun Apr 15 16:47:38 UTC 2012 x86_64 /usr/bin/python 2.7.2+ (default, Nov 30 2011, 19:22:03) [GCC 4.6.2] ================================================================================ Using temporary directory /tmp/duplicity-1gR6jJ-tempdir Registering (mkstemp) temporary file /tmp/duplicity-1gR6jJ-tempdir/mkstemp- nElTiF-1 Temp has 75120640 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: /home/rata/.cache/duplicity/ca450b9de1f3f3afff546aee88a75d8c Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. 
GnuPG passphrase: Removing still remembered temporary file /tmp/duplicity-1gR6jJ- tempdir/mkstemp-nElTiF-1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1404, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1397, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1331, in main restore(col_stats) File ""/usr/bin/duplicity"", line 625, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/bin/duplicity"", line 647, in restore_get_patched_rop_iter backup_chain = col_stats.get_backup_chain_at_time(time) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 947, in get_backup_chain_at_time raise CollectionsError(""No backup chains found"") CollectionsError: No backup chains found When running collection-status, it also shows a not very clear error: $ duplicity collection-status --s3-use-new-style -v9 --num-retries 5 's3+http:///' Using archive dir: /home/rata/.cache/duplicity/ca450b9de1f3f3afff546aee88a75d8c Using backup name: ca450b9de1f3f3afff546aee88a75d8c Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Main action: collection-status ================================================================================ duplicity 0.6.18 (February 29, 2012) Args: /usr/bin/duplicity collection-status --s3-use-new-style -v9 --num- retries 5 s3+http:/// 
Linux lindsay 3.2.0-2-amd64 #1 SMP Sun Apr 15 16:47:38 UTC 2012 x86_64 /usr/bin/python 2.7.2+ (default, Nov 30 2011, 19:22:03) [GCC 4.6.2] ================================================================================ Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: /home/rata/.cache/duplicity/ca450b9de1f3f3afff546aee88a75d8c Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: /home/rata/.cache/duplicity/ca450b9de1f3f3afff546aee88a75d8c Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. Using temporary directory /tmp/duplicity-zHyz8c-tempdir It really took me a while to realize what was happening here. I realized when trying to use this PC (with its clock not in sync) to back up a new file and test a basic backup on Amazon S3 of an unimportant file, and it threw this error (with verbose): $ duplicity --s3-use-new-style -v9 --num-retries 5 bkp-dir/ 's3+http:///test1' (I quoted the relevant part only; the output is really big. I can attach it if you need it) AsyncScheduler: running task synchronously (asynchronicity disabled) Failed to create bucket (attempt #1) '' failed (reason: S3ResponseError: S3ResponseError: 403 Forbidden RequestTimeTooSkewedThe difference between the request time and the current time is too large.900000BA793C6067E73366GXaz1ADuulZqpYxrS23487mnGxtfzr2Q0vsKje8LKJlCnWnQJuZcfyp95yZR2Vq2Wed, 23 May 2012 16:13:56 GMT2012-05-23T19:13:29Z) Also, I can consistently reproduce this: if my clock is not in sync, then this error is shown; if I put it in sync, it works okay. 
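The 900000 in the error body is the skew limit in milliseconds: S3 rejects any request whose signed timestamp is more than 15 minutes away from server time. A toy check using the two timestamps quoted in the error above (illustrative only, not boto or duplicity code):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)  # the 900000 ms from the error body

def request_time_ok(local_time, server_time):
    # S3 compares the signed request time against its own clock.
    return abs(server_time - local_time) <= MAX_SKEW

# Timestamps from the RequestTimeTooSkewed response in the log:
server = datetime(2012, 5, 23, 19, 13, 29, tzinfo=timezone.utc)
local = datetime(2012, 5, 23, 16, 13, 56, tzinfo=timezone.utc)
print(request_time_ok(local, server))   # ~3 h of skew: rejected
print(request_time_ok(server, server))  # clocks in sync: accepted
```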
Btw, to change the clock I used: ""date -s ; hwclock -s"". And to get back in sync: ""ntpdate-debian"" (that is a Debian-specific command, but I think Ubuntu has it too, and running ntpdate with appropriate parameters will do the trick as well). It would be really nice if the error shown when trying to restore a backup were nicer and properly told what is wrong. It really took me a while to understand the cause =) Perhaps even showing this very same (not nice, but clear) error that is shown when you try to create a backup. But really the trace and ""No backup chains found"" doesn't say anything to me :S Please let me know if you need more information or if I can help you test a patch. Thanks a lot, Rodrigo ```",10 118018851,2012-05-22 22:31:50.493,Support for non-default S3 regions (lp:#1003159),"[Original report](https://bugs.launchpad.net/bugs/1003159) created by **Luke (lnbeamer)** ``` Support for Amazon's S3 eastern data centers would be much appreciated! The connections are already supported in Boto: http://docs.pythonboto.org/en/latest/ref/s3.html#module-boto.s3.connection Please expose these S3 features of Boto so that those of us located in non-US/EU locations can take advantage of the better bandwidth of using buckets that are physically closer. 
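A rough sketch of what a region option could resolve to, using Amazon's historical s3-&lt;region&gt;.amazonaws.com endpoint naming. The mapping and the endpoint_for helper are illustrative assumptions, not duplicity code; a real implementation would go through boto's region-aware connection support linked above:

```python
# Hypothetical mapping for the proposed region option; endpoint names
# follow Amazon's historical s3-<region>.amazonaws.com scheme.
S3_ENDPOINTS = {
    "ap-northeast-1": "s3-ap-northeast-1.amazonaws.com",
    "ap-southeast-1": "s3-ap-southeast-1.amazonaws.com",
    "sa-east-1": "s3-sa-east-1.amazonaws.com",
    "us-west-1": "s3-us-west-1.amazonaws.com",
    "us-west-2": "s3-us-west-2.amazonaws.com",
}

def endpoint_for(region, default="s3.amazonaws.com"):
    # Unknown or unset regions fall back to the classic US endpoint.
    return S3_ENDPOINTS.get(region, default)

print(endpoint_for("us-west-2"))
```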
For example, an option like --s3-region with the following options: ap-northeast-1 ap-southeast-1 EU sa-east-1 us-west-1 us-west-2 ```",50 118020116,2012-04-20 15:28:51.431,error: Error -3 while decompressing: invalid stored block lengths (lp:#986238),"[Original report](https://bugs.launchpad.net/bugs/986238) created by **Florian Brucker (mail-florianbrucker)** ``` I just received the following error: ----- Using archive dir: /home/torf/.cache/duplicity/2de918c23d86e0fa0189a54725b21ec2 Using backup name: 2de918c23d86e0fa0189a54725b21ec2 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Main action: inc ================================================================================ duplicity 0.6.13 (April 02, 2011) Args: /usr/bin/duplicity -v9 --full-if-older-than 3M --volsize 100 --log- file /var/backups/torf/duplicity_20120420172108.log --encrypt-key XXXXX --exclude /home/torf/tmp --exclude /home/torf/restore --exclude /home/torf/files --include /home/torf/.thunderbird --exclude /home/torf/.mozilla/firefox/zjtevog8.default/Cache --include /home/torf/.mozilla --include /home/torf/.config --include /home/torf/.liferea_1.6 --include /home/torf/.vim --exclude /home/torf/.*/** /home/torf file:///var/backups/torf Linux erdoes 2.6.38-14-generic #58-Ubuntu SMP Tue Mar 27 18:48:46 UTC 2012 i686 i686 /usr/bin/python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) [GCC 4.5.2] 
================================================================================ Using temporary directory /tmp/duplicity-dKBAtj-tempdir Registering (mkstemp) temporary file /tmp/duplicity-dKBAtj-tempdir/mkstemp- yu29om-1 Temp has 172491423744 available, backup will use approx 136314880. Local and Remote metadata are synchronized, no sync needed. 357 files exist on backend 8 files exist in cache Extracting backup chains from list of files: ['duplicity- full.20120111T214136Z.vol98.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol14.difftar.gpg', 'duplicity- full.20120111T214136Z.vol54.difftar.gpg', 'duplicity-full- signatures.20120411T124231Z.sigtar.gpg', 'duplicity- full.20120411T124231Z.vol15.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol17.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol29.difftar.gpg', 'duplicity- full.20120111T214136Z.vol9.difftar.gpg', 'duplicity- full.20120411T124231Z.vol75.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol93.difftar.gpg', 'duplicity- full.20120411T124231Z.vol42.difftar.gpg', 'duplicity- full.20120411T124231Z.vol65.difftar.gpg', 'duplicity- full.20120411T124231Z.vol73.difftar.gpg', 'duplicity- full.20120411T124231Z.vol99.difftar.gpg', 'duplicity- full.20120411T124231Z.vol59.difftar.gpg', 'duplicity- full.20120111T214136Z.vol99.difftar.gpg', 'duplicity- full.20120411T124231Z.vol10.difftar.gpg', 'duplicity- full.20120111T214136Z.vol34.difftar.gpg', 'duplicity- full.20120411T124231Z.vol47.difftar.gpg', 'duplicity- full.20120111T214136Z.vol49.difftar.gpg', 'duplicity- full.20120411T124231Z.vol51.difftar.gpg', 'duplicity- full.20120111T214136Z.vol79.difftar.gpg', 'duplicity- full.20120111T214136Z.vol102.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol1.difftar.gpg', 'duplicity- full.20120111T214136Z.vol101.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol31.difftar.gpg', 'duplicity- 
full.20120111T214136Z.vol39.difftar.gpg', 'duplicity- full.20120111T214136Z.vol19.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol22.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol42.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol103.difftar.gpg', 'duplicity- full.20120411T124231Z.vol14.difftar.gpg', 'duplicity- full.20120111T214136Z.vol112.difftar.gpg', 'duplicity- full.20120111T214136Z.vol74.difftar.gpg', 'duplicity- full.20120411T124231Z.vol20.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol49.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol97.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol27.difftar.gpg', 'duplicity- full.20120411T124231Z.vol57.difftar.gpg', 'duplicity- full.20120111T214136Z.vol105.difftar.gpg', 'duplicity- full.20120411T124231Z.vol101.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol8.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol72.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol80.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol46.difftar.gpg', 'duplicity- full.20120111T214136Z.vol91.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol100.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol39.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol57.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol90.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol99.difftar.gpg', 'duplicity- full.20120411T124231Z.vol34.difftar.gpg', 'duplicity- full.20120111T214136Z.vol23.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol16.difftar.gpg', 'duplicity- full.20120411T124231Z.vol111.difftar.gpg', 'duplicity- full.20120411T124231Z.vol60.difftar.gpg', 'duplicity- full.20120411T124231Z.vol110.difftar.gpg', 'duplicity- 
full.20120411T124231Z.vol92.difftar.gpg', 'duplicity- full.20120111T214136Z.vol107.difftar.gpg', 'duplicity- full.20120411T124231Z.vol55.difftar.gpg', 'duplicity- full.20120411T124231Z.vol82.difftar.gpg', 'duplicity- full.20120111T214136Z.vol32.difftar.gpg', 'duplicity- full.20120111T214136Z.vol111.difftar.gpg', 'duplicity- full.20120111T214136Z.vol15.difftar.gpg', 'duplicity- full.20120411T124231Z.vol104.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol78.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol83.difftar.gpg', 'duplicity- inc.20120214T191617Z.to.20120403T181622Z.vol1.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol52.difftar.gpg', 'duplicity- full.20120411T124231Z.vol29.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol69.difftar.gpg', 'duplicity- full.20120411T124231Z.vol37.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol62.difftar.gpg', 'duplicity_20120411144230.log', 'duplicity- full.20120111T214136Z.vol48.difftar.gpg', 'duplicity- full.20120111T214136Z.vol31.difftar.gpg', 'duplicity- inc.20120214T191617Z.to.20120403T181622Z.manifest.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol2.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol73.difftar.gpg', 'duplicity- full.20120411T124231Z.vol13.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol3.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol96.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol28.difftar.gpg', 'duplicity- full.20120411T124231Z.vol72.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol71.difftar.gpg', 'duplicity- full.20120411T124231Z.vol103.difftar.gpg', 'duplicity- full.20120411T124231Z.vol18.difftar.gpg', 'duplicity- full.20120111T214136Z.vol104.difftar.gpg', 'duplicity- full.20120411T124231Z.vol5.difftar.gpg', 'duplicity- full.20120411T124231Z.vol22.difftar.gpg', 
'duplicity- full.20120411T124231Z.vol74.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol9.difftar.gpg', 'duplicity- full.20120111T214136Z.vol109.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol53.difftar.gpg', 'duplicity- full.20120111T214136Z.vol37.difftar.gpg', 'duplicity- full.20120111T214136Z.vol11.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol66.difftar.gpg', 'duplicity- full.20120411T124231Z.vol27.difftar.gpg', 'duplicity- full.20120411T124231Z.vol64.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol23.difftar.gpg', 'duplicity- full.20120411T124231Z.vol23.difftar.gpg', 'duplicity- full.20120111T214136Z.vol56.difftar.gpg', 'duplicity- full.20120411T124231Z.vol88.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol91.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol92.difftar.gpg', 'duplicity- full.20120411T124231Z.vol86.difftar.gpg', 'duplicity- full.20120411T124231Z.vol112.difftar.gpg', 'duplicity- full.20120111T214136Z.vol38.difftar.gpg', 'duplicity- full.20120411T124231Z.vol49.difftar.gpg', 'duplicity- full.20120411T124231Z.vol4.difftar.gpg', 'duplicity- full.20120411T124231Z.vol62.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol26.difftar.gpg', 'duplicity_20120420171459.log', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol58.difftar.gpg', 'duplicity- full.20120411T124231Z.vol44.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol25.difftar.gpg', 'duplicity- full.20120411T124231Z.vol98.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol5.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol60.difftar.gpg', 'duplicity- full.20120111T214136Z.vol43.difftar.gpg', 'duplicity- full.20120411T124231Z.vol48.difftar.gpg', 'duplicity- full.20120111T214136Z.vol90.difftar.gpg', 'duplicity- full.20120111T214136Z.vol13.difftar.gpg', 'duplicity- 
full.20120111T214136Z.vol78.difftar.gpg', 'duplicity- full.20120111T214136Z.vol18.difftar.gpg', 'duplicity- full.20120111T214136Z.vol117.difftar.gpg', 'duplicity- full.20120111T214136Z.vol94.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol54.difftar.gpg', 'duplicity- full.20120111T214136Z.vol63.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol102.difftar.gpg', 'duplicity- full.20120111T214136Z.vol114.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol98.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol43.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol56.difftar.gpg', 'duplicity- full.20120111T214136Z.vol27.difftar.gpg', 'duplicity- full.20120111T214136Z.vol92.difftar.gpg', 'duplicity- full.20120111T214136Z.vol52.difftar.gpg', 'duplicity- full.20120111T214136Z.vol113.difftar.gpg', 'duplicity- full.20120111T214136Z.vol82.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol30.difftar.gpg', 'duplicity- full.20120111T214136Z.vol81.difftar.gpg', 'duplicity- full.20120111T214136Z.vol87.difftar.gpg', 'duplicity- full.20120111T214136Z.vol25.difftar.gpg', 'duplicity- full.20120111T214136Z.vol16.difftar.gpg', 'duplicity- full.20120111T214136Z.vol46.difftar.gpg', 'duplicity- full.20120411T124231Z.vol100.difftar.gpg', 'duplicity- full.20120111T214136Z.vol83.difftar.gpg', 'duplicity- full.20120411T124231Z.vol115.difftar.gpg', 'duplicity- full.20120411T124231Z.vol11.difftar.gpg', 'duplicity- full.20120411T124231Z.vol46.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.manifest.gpg', 'duplicity- full.20120111T214136Z.vol106.difftar.gpg', 'duplicity- full.20120111T214136Z.vol24.difftar.gpg', 'duplicity- full.20120411T124231Z.vol53.difftar.gpg', 'duplicity- full.20120111T214136Z.vol22.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol40.difftar.gpg', 'duplicity- full.20120411T124231Z.vol83.difftar.gpg', 'duplicity- 
inc.20120111T214136Z.to.20120214T191617Z.vol70.difftar.gpg', 'duplicity- full.20120111T214136Z.vol61.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol50.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol101.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol79.difftar.gpg', 'duplicity- full.20120411T124231Z.vol3.difftar.gpg', 'duplicity- full.20120411T124231Z.vol96.difftar.gpg', 'duplicity- full.20120411T124231Z.vol119.difftar.gpg', 'duplicity- full.20120411T124231Z.vol36.difftar.gpg', 'duplicity- full.20120111T214136Z.vol95.difftar.gpg', 'duplicity- full.20120411T124231Z.vol7.difftar.gpg', 'duplicity- full.20120111T214136Z.vol88.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol64.difftar.gpg', 'duplicity- full.20120411T124231Z.vol117.difftar.gpg', 'duplicity_20120403201622.log', 'duplicity-full.20120411T124231Z.vol54.difftar.gpg', 'duplicity- full.20120411T124231Z.vol32.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol85.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol65.difftar.gpg', 'duplicity- full.20120411T124231Z.vol108.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol10.difftar.gpg', 'duplicity- full.20120411T124231Z.vol41.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol86.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol12.difftar.gpg', 'duplicity- full.20120411T124231Z.vol113.difftar.gpg', 'duplicity- full.20120111T214136Z.vol26.difftar.gpg', 'duplicity- full.20120111T214136Z.vol44.difftar.gpg', 'duplicity- full.20120411T124231Z.vol31.difftar.gpg', 'duplicity- full.20120411T124231Z.vol16.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol13.difftar.gpg', 'duplicity- full.20120111T214136Z.vol42.difftar.gpg', 'duplicity- full.20120111T214136Z.vol4.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol74.difftar.gpg', 'duplicity- 
full.20120111T214136Z.vol29.difftar.gpg', 'duplicity_20120420172108.log', 'duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol59.difftar.gpg', 'duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol34.difftar.gpg', 'duplicity-full.20120411T124231Z.vol85.difftar.gpg', 'duplicity- full.20120111T214136Z.vol8.difftar.gpg', 'duplicity- full.20120411T124231Z.vol9.difftar.gpg', 'duplicity- full.20120411T124231Z.vol68.difftar.gpg', 'duplicity_20120111224136.log', 'duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol19.difftar.gpg', 'duplicity-full.20120111T214136Z.vol10.difftar.gpg', 'duplicity- full.20120411T124231Z.vol30.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol18.difftar.gpg', 'duplicity- full.20120411T124231Z.vol79.difftar.gpg', 'duplicity- full.20120411T124231Z.vol116.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol32.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol36.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol35.difftar.gpg', 'duplicity- full.20120111T214136Z.vol20.difftar.gpg', 'duplicity_20120420171804.log', 'duplicity-new-signatures.20120214T191617Z.to.20120403T181622Z.sigtar.gpg', 'duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol68.difftar.gpg', 'duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol51.difftar.gpg', 'duplicity-full.20120111T214136Z.vol14.difftar.gpg', 'duplicity- full.20120411T124231Z.vol93.difftar.gpg', 'duplicity- full.20120411T124231Z.vol71.difftar.gpg', 'duplicity- full.20120111T214136Z.vol6.difftar.gpg', 'duplicity- full.20120411T124231Z.vol6.difftar.gpg', 'duplicity- full.20120111T214136Z.vol68.difftar.gpg', 'duplicity- full.20120411T124231Z.vol12.difftar.gpg', 'duplicity- full.20120111T214136Z.vol12.difftar.gpg', 'duplicity- full.20120411T124231Z.vol35.difftar.gpg', 'duplicity- full.20120411T124231Z.vol2.difftar.gpg', 'duplicity- full.20120111T214136Z.vol76.difftar.gpg', 'duplicity- full.20120111T214136Z.vol51.difftar.gpg', 
'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol55.difftar.gpg', 'duplicity- full.20120111T214136Z.vol115.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol38.difftar.gpg', 'duplicity- full.20120411T124231Z.vol45.difftar.gpg', 'duplicity- full.20120111T214136Z.vol2.difftar.gpg', 'duplicity- full.20120411T124231Z.vol61.difftar.gpg', 'duplicity- full.20120411T124231Z.vol89.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol45.difftar.gpg', 'duplicity- full.20120111T214136Z.vol55.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol4.difftar.gpg', 'duplicity- full.20120411T124231Z.vol67.difftar.gpg', 'duplicity- full.20120111T214136Z.vol84.difftar.gpg', 'duplicity- full.20120111T214136Z.vol3.difftar.gpg', 'duplicity- full.20120411T124231Z.vol21.difftar.gpg', 'duplicity- full.20120411T124231Z.vol17.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol84.difftar.gpg', 'duplicity- full.20120111T214136Z.vol85.difftar.gpg', 'duplicity- full.20120411T124231Z.vol33.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol61.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol77.difftar.gpg', 'duplicity- full.20120411T124231Z.vol24.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol81.difftar.gpg', 'duplicity- full.20120111T214136Z.vol21.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol44.difftar.gpg', 'duplicity- full.20120411T124231Z.vol106.difftar.gpg', 'duplicity- full.20120411T124231Z.vol70.difftar.gpg', 'duplicity- full.20120111T214136Z.vol67.difftar.gpg', 'duplicity- full.20120111T214136Z.vol53.difftar.gpg', 'duplicity- full.20120411T124231Z.vol81.difftar.gpg', 'duplicity-full- signatures.20120111T214136Z.sigtar.gpg', 'duplicity- full.20120411T124231Z.vol107.difftar.gpg', 'duplicity- full.20120411T124231Z.vol80.difftar.gpg', 'duplicity- full.20120411T124231Z.vol118.difftar.gpg', 'duplicity- 
full.20120411T124231Z.vol19.difftar.gpg', 'duplicity-new- signatures.20120111T214136Z.to.20120214T191617Z.sigtar.gpg', 'duplicity- full.20120111T214136Z.vol5.difftar.gpg', 'duplicity- full.20120111T214136Z.vol59.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol15.difftar.gpg', 'duplicity- full.20120111T214136Z.vol36.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol11.difftar.gpg', 'duplicity- full.20120111T214136Z.vol116.difftar.gpg', 'duplicity_20120420172057.log', 'duplicity-full.20120111T214136Z.vol35.difftar.gpg', 'duplicity- full.20120411T124231Z.vol105.difftar.gpg', 'duplicity- full.20120111T214136Z.vol1.difftar.gpg', 'duplicity- full.20120411T124231Z.vol109.difftar.gpg', 'duplicity- full.20120111T214136Z.vol28.difftar.gpg', 'duplicity- full.20120111T214136Z.vol100.difftar.gpg', 'duplicity- full.20120411T124231Z.vol94.difftar.gpg', 'duplicity- full.20120411T124231Z.vol84.difftar.gpg', 'duplicity- full.20120411T124231Z.vol52.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol63.difftar.gpg', 'duplicity- full.20120411T124231Z.vol77.difftar.gpg', 'duplicity- full.20120111T214136Z.vol110.difftar.gpg', 'duplicity- full.20120111T214136Z.vol58.difftar.gpg', 'duplicity- full.20120111T214136Z.vol70.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol89.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol37.difftar.gpg', 'duplicity- full.20120111T214136Z.vol93.difftar.gpg', 'duplicity- full.20120411T124231Z.vol25.difftar.gpg', 'duplicity- full.20120111T214136Z.vol80.difftar.gpg', 'duplicity- full.20120411T124231Z.vol43.difftar.gpg', 'duplicity- full.20120111T214136Z.vol96.difftar.gpg', 'duplicity_20120214201617.log', 'duplicity-full.20120111T214136Z.vol97.difftar.gpg', 'duplicity- full.20120111T214136Z.vol89.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol24.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol7.difftar.gpg', 
'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol47.difftar.gpg', 'duplicity- full.20120411T124231Z.vol38.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol41.difftar.gpg', 'duplicity- full.20120411T124231Z.vol39.difftar.gpg', 'duplicity- full.20120411T124231Z.vol120.difftar.gpg', 'duplicity- full.20120111T214136Z.vol75.difftar.gpg', 'duplicity- full.20120111T214136Z.vol47.difftar.gpg', 'duplicity- full.20120411T124231Z.vol56.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol95.difftar.gpg', 'duplicity- full.20120111T214136Z.vol77.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol6.difftar.gpg', 'duplicity- full.20120111T214136Z.vol86.difftar.gpg', 'duplicity- full.20120111T214136Z.vol57.difftar.gpg', 'duplicity- full.20120111T214136Z.vol50.difftar.gpg', 'duplicity- full.20120411T124231Z.vol63.difftar.gpg', 'duplicity- full.20120111T214136Z.vol65.difftar.gpg', 'duplicity- full.20120111T214136Z.vol7.difftar.gpg', 'duplicity- full.20120411T124231Z.manifest.gpg', 'duplicity- full.20120111T214136Z.vol33.difftar.gpg', 'duplicity- full.20120411T124231Z.vol90.difftar.gpg', 'duplicity- full.20120411T124231Z.vol87.difftar.gpg', 'duplicity- full.20120111T214136Z.vol64.difftar.gpg', 'duplicity- full.20120411T124231Z.vol66.difftar.gpg', 'duplicity- full.20120411T124231Z.vol8.difftar.gpg', 'duplicity- full.20120411T124231Z.vol28.difftar.gpg', 'duplicity- full.20120411T124231Z.vol69.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol33.difftar.gpg', 'duplicity- full.20120411T124231Z.vol78.difftar.gpg', 'duplicity- full.20120411T124231Z.vol58.difftar.gpg', 'duplicity- full.20120411T124231Z.vol1.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol67.difftar.gpg', 'duplicity- full.20120111T214136Z.vol17.difftar.gpg', 'duplicity- full.20120111T214136Z.vol66.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol21.difftar.gpg', 'duplicity- 
full.20120111T214136Z.vol108.difftar.gpg', 'duplicity- full.20120411T124231Z.vol97.difftar.gpg', 'duplicity- full.20120411T124231Z.vol95.difftar.gpg', 'duplicity- full.20120411T124231Z.vol26.difftar.gpg', 'duplicity- full.20120111T214136Z.vol69.difftar.gpg', 'duplicity- full.20120111T214136Z.vol62.difftar.gpg', 'duplicity- full.20120111T214136Z.vol72.difftar.gpg', 'duplicity- full.20120411T124231Z.vol102.difftar.gpg', 'duplicity- full.20120111T214136Z.vol73.difftar.gpg', 'duplicity- full.20120111T214136Z.vol103.difftar.gpg', 'duplicity- full.20120111T214136Z.vol41.difftar.gpg', 'duplicity- full.20120411T124231Z.vol40.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol82.difftar.gpg', 'duplicity- full.20120111T214136Z.vol40.difftar.gpg', 'duplicity- full.20120411T124231Z.vol91.difftar.gpg', 'duplicity- full.20120111T214136Z.manifest.gpg', 'duplicity- full.20120411T124231Z.vol76.difftar.gpg', 'duplicity- full.20120111T214136Z.vol60.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol88.difftar.gpg', 'duplicity- full.20120111T214136Z.vol45.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol94.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol87.difftar.gpg', 'duplicity- full.20120111T214136Z.vol71.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol20.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol48.difftar.gpg', 'duplicity- full.20120111T214136Z.vol30.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol75.difftar.gpg', 'duplicity- inc.20120111T214136Z.to.20120214T191617Z.vol76.difftar.gpg', 'duplicity- full.20120411T124231Z.vol114.difftar.gpg', 'duplicity- full.20120411T124231Z.vol50.difftar.gpg'] File duplicity-full.20120111T214136Z.vol98.difftar.gpg is not part of a known set; creating new set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol14.difftar.gpg is not part of a known set; creating new set File 
duplicity-full.20120111T214136Z.vol54.difftar.gpg is part of known set File duplicity-full-signatures.20120411T124231Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- signatures.20120411T124231Z.sigtar.gpg' File duplicity-full.20120411T124231Z.vol15.difftar.gpg is not part of a known set; creating new set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol17.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol29.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol9.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol75.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol93.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol42.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol65.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol73.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol99.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol59.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol99.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol10.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol34.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol47.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol49.difftar.gpg is part of known set File duplicity-full.20120411T124231Z.vol51.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol79.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol102.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol1.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol101.difftar.gpg is part of known set File 
duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol31.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol39.difftar.gpg is part of known set File duplicity-full.20120111T214136Z.vol19.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol22.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol42.difftar.gpg is part of known set File duplicity-inc.20120111T214136Z.to.20120214T191617Z.vol103.difftar.gpg is part of known set ... Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 20.jpg Comparing ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 20.jpg') and ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 20.jpg') Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 21.jpg Comparing ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 21.jpg') and ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 21.jpg') Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 22.jpg Comparing ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 22.jpg') and ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 22.jpg') Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 23.jpg Comparing ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 23.jpg') and ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 23.jpg') Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 24.jpg Comparing ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 24.jpg') and ('docs', 'photos', '2005', '2005 - Konstanz im Nebel', '2005 - Konstanz im Nebel - 
24.jpg') Selecting /home/torf/docs/photos/2005/2005 - Konstanz im Nebel/2005 - Konstanz im Nebel - 25.jpg Removing still remembered temporary file /tmp/duplicity-dKBAtj- tempdir/mktemp-PwMbhN-2 Removing still remembered temporary file /tmp/duplicity-dKBAtj- tempdir/mkstemp-yu29om-1 Removing still remembered temporary file /tmp/duplicity-dKBAtj- tempdir/mktemp-kBoce9-3 Removing still remembered temporary file /home/torf/.cache/duplicity/2de918c23d86e0fa0189a54725b21ec2/duplicity-1cdzAp- tempdir/mktemp-pU_o3W-1 Removing still remembered temporary file /home/torf/.cache/duplicity/2de918c23d86e0fa0189a54725b21ec2/duplicity- emVNgX-tempdir/mktemp-bJyoep-1 ----- Here's the output to STDERR: ----- Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1265, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1258, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1240, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 488, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 279, in GPGWriteFile data = block_iter.next(min(block_size, bytes_to_go)).data File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 505, in next result = self.process(self.input_iter.next(), size) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 187, in get_delta_iter for new_path, sig_path in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 275, in collate2iters relem2 = riter2.next() File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 344, in combine_path_iters refresh_triple_list(triple_list) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 330, in refresh_triple_list new_triple = get_triple(old_triple[1]) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 316, in get_triple path = 
path_iter_list[iter_index].next() File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 229, in sigtar2path_iter for tarinfo in tf: File ""/usr/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 1211, in next tarinfo = self.tarfile.next() File ""/usr/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 541, in next self.throwaway_until(self.next_chunk) File ""/usr/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 521, in throwaway_until self.fileobj.read(bufsize) File ""/usr/lib/python2.7/gzip.py"", line 252, in read self._read(readsize) File ""/usr/lib/python2.7/gzip.py"", line 303, in _read uncompress = self.decompress.decompress(buf) error: Error -3 while decompressing: invalid stored block lengths ----- I'm running Ubuntu 11.04, Python 2.7.1+ and duplicity 0.6.13. The target file system is an ext4. The last backup completed without problems. Please tell me if you need more information to investigate this. Thanks for any help! ```",10 118020111,2012-04-16 09:04:56.073,Verify should only operate on changed files in incremental backup (lp:#982902),"[Original report](https://bugs.launchpad.net/bugs/982902) created by **Padfoot (padfoot)** ``` This is more of a wishlist request to change or extend the verify function. Currently, verify will check every file (as if I have done a full backup) rather than just the newly backed up files in an incremental backup. To me, this seems a waste of processor resources, as presumably, the unchanged files would have been verified on the initial full backup (if users are following a recommended backup regime). Doing an incremental backup is very fast indeed, yet when I verify afterwards, it takes a very long time as it is not only verifying the incremental changes, but all the already verified files. May I request the verify option to check the entire backup for a full backup and only the incremental file changes in an incremental backup? Cheers. 
```",12 118019448,2010-04-25 16:17:44.442,Problem backing up to S3 (lp:#569826),"[Original report](https://bugs.launchpad.net/bugs/569826) created by **pf (poon-fung)** ``` I tried to back up to S3 and got an error. I am running duplicity on Ubuntu 8.04 LTS. The same command backing up to the local file system worked just fine: duplicity full --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY --volsize=1024 \ --archive-dir=/duplicity/archive --tempdir=/duplicity/tmp \ --include=/data1 --include=/data2 --exclude=/ / file:///duplicity But an error occurred when backing up to S3. The S3 bucket specified already exists and is empty. duplicity full --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY --volsize=1024 \ --archive-dir=/duplicity/archive --tempdir=/duplicity/tmp \ --include=/data1 --include=/data2 --exclude=/ / s3+http://mybucket-test-backup Here is the console output: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 828, in with_tempdir(main) File ""/usr/bin/duplicity"", line 821, in with_tempdir fn() File ""/usr/bin/duplicity"", line 795, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 336, in full_backup bytes_written = write_multivol(""full"", tarblock_iter, globals.backend) File ""/usr/bin/duplicity"", line 205, in write_multivol mf = manifest.Manifest().set_dirinfo() File ""/usr/lib/python2.5/site-packages/duplicity/manifest.py"", line 54, in set_dirinfo self.local_dirname = globals.local_path.name AttributeError: 'NoneType' object has no attribute 'name' Main action: inc /usr/bin/python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] duplicity 0.5.16 (April 21, 2009) Linux test1.testdomain.com 2.6.24-19-server #1 SMP Wed Aug 20 23:54:28 UTC 2008 i686 Using temporary directory /duplicity/tmp/duplicity-zMjQZU-tempdir Registering (mkstemp) temporary file /duplicity/tmp/duplicity-zMjQZU-tempdir/mkstemp-h-S85K-1 Temp has 22722093056 available, backup will use approx 2469606195. 
0 files exist on backend Extracting backup chains from list of files: [] Collection Status ----------------- Connecting with backend: BotoBackend Archive dir: (() /duplicity/archive dir) Found 0 backup chains without signatures. No backup chains with active signatures found No orphaned or incomplete backup sets found. Last full backup date: none Registering (mktemp) temporary file /duplicity/tmp/duplicity-zMjQZU-tempdir/mktemp-27JOP7-2 Using temporary directory /duplicity/archive/duplicity-HjOA7b-tempdir Registering (mktemp) temporary file /duplicity/archive/duplicity-HjOA7b-tempdir/mktemp-e0tL_C-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /duplicity/tmp/duplicity-zMjQZU-tempdir/mktemp-FSGBBU-3 Selecting / Comparing () and None Getting delta of (() / dir) and None Generating delta - new file: . Error accessing possibly locked file /dbdata Selecting /duplicity Comparing ('duplicity',) and None Getting delta of (('duplicity',) /duplicity dir) and None Generating delta - new file: duplicity ```",6 118018679,2010-04-08 12:12:19.509,Duplicity doesn't save ACLs/xattrs (lp:#558385),"[Original report](https://bugs.launchpad.net/bugs/558385) created by **Woland (wolandtel)** ``` I have an ext3 filesystem mounted with -o acl,user_xattr and a directory on it with some files having ACLs and extended attributes. duplicity --no-encryption dir file://bk duplicity --no-encryption file://bk res Now the files in res have neither ACLs nor xattrs. 
duplicity 0.6.08b (from distro and from sources) Python 2.5.5 Debian testing GNU/Linux aiur:/mnt/test# duplicity -v9 --no-encryption alfa/ file://bk Using archive dir: /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e Using backup name: 31f2abd7613ded96f788766f0cbee63e Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.botobackend Succeeded Main action: inc ================================================================================ duplicity 0.6.08b (March 11, 2010) Args: /usr/bin/duplicity -v9 --no-encryption alfa/ file://bk Linux aiur 2.6.32.7-aiur #3 SMP Wed Feb 10 20:35:13 YEKT 2010 i686 /usr/bin/python 2.5.5 (r255:77872, Feb 1 2010, 19:53:42) [GCC 4.4.3] ================================================================================ Using temporary directory /tmp/duplicity-a5WVEO-tempdir Registering (mkstemp) temporary file /tmp/duplicity-a5WVEO-tempdir/mkstemp- AMb2kK-1 Temp has 249462784 available, backup will use approx 34078720. Synchronizing remote metadata to local cache... Deleting local /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity-full- signatures.20100408T115330Z.sigtar.gz (not authoritative at backend). Deleting local /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity- full.20100408T115330Z.manifest (not authoritative at backend). 
0 files exist on backend 0 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: LocalBackend Archive dir: /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No signatures found, switching to full backup. Using temporary directory /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity-AWuhUl- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity-AWuhUl- tempdir/mktemp-P4y5eN-1 Using temporary directory /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity- ZR3L5J-tempdir Registering (mktemp) temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity- ZR3L5J-tempdir/mktemp-OP8eXA-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-a5WVEO-tempdir/mktemp- ArJkgN-2 Selecting alfa Comparing () and None Getting delta of (() alfa dir) and None A . 
Selecting alfa/file Comparing ('file',) and None Getting delta of (('file',) alfa/file reg) and None A file Selecting alfa/test Comparing ('test',) and None Getting delta of (('test',) alfa/test reg) and None A test Selecting alfa/ Comparing ('\xed\xf4',) and None Getting delta of (('\xed\xf4',) alfa/reg) and None A Selecting alfa/ Comparing ('\xf2\xe5\xf1\xf2',) and None Getting delta of (('\xf2\xe5\xf1\xf2',) alfa/ reg) and None A Removing still remembered temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity-AWuhUl- tempdir/mktemp-P4y5eN-1 Cleanup of temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity-AWuhUl- tempdir/mktemp-P4y5eN-1 failed Removing still remembered temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity- ZR3L5J-tempdir/mktemp-OP8eXA-1 Cleanup of temporary file /root/.cache/duplicity/31f2abd7613ded96f788766f0cbee63e/duplicity- ZR3L5J-tempdir/mktemp-OP8eXA-1 failed AsyncScheduler: running task synchronously (asynchronicity disabled) Writing bk/duplicity-full.20100408T120817Z.vol1.difftar.gz Deleting /tmp/duplicity-a5WVEO-tempdir/mktemp-ArJkgN-2 Forgetting temporary file /tmp/duplicity-a5WVEO-tempdir/mktemp-ArJkgN-2 AsyncScheduler: task completed successfully Processed volume 1 Writing bk/duplicity-full-signatures.20100408T120817Z.sigtar.gz Writing bk/duplicity-full.20100408T120817Z.manifest 3 files exist on backend 2 files exist in cache Extracting backup chains from list of files: ['duplicity- full.20100408T120817Z.manifest', 'duplicity- full.20100408T120817Z.vol1.difftar.gz', 'duplicity-full- signatures.20100408T120817Z.sigtar.gz'] File duplicity-full.20100408T120817Z.manifest is not part of a known set; creating new set File duplicity-full.20100408T120817Z.vol1.difftar.gz is part of known set File duplicity-full-signatures.20100408T120817Z.sigtar.gz is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full- 
signatures.20100408T120817Z.sigtar.gz' Found backup chain [Thu Apr 8 18:08:17 2010]-[Thu Apr 8 18:08:17 2010] --------------[ Backup Statistics ]-------------- StartTime 1270728497.40 (Thu Apr 8 18:08:17 2010) EndTime 1270728497.42 (Thu Apr 8 18:08:17 2010) ElapsedTime 0.01 (0.01 seconds) SourceFiles 5 SourceFileSize 1024 (1.00 KB) NewFiles 5 NewFileSize 1024 (1.00 KB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 5 RawDeltaSize 0 (0 bytes) TotalDestinationSizeChange 220 (220 bytes) Errors 0 ------------------------------------------------- Removing still remembered temporary file /tmp/duplicity-a5WVEO-tempdir/mkstemp-AMb2kK-1 ```",136 118019443,2010-03-28 17:23:24.403,Should detect unencrypted backups even if --no-encryption omitted (lp:#550368),"[Original report](https://bugs.launchpad.net/bugs/550368) created by **Olivier Berger (olivierberger)** ``` When a backup was made with --no-encryption, subsequent calls to collection-status should auto-detect that it was not encrypted. At the moment, issuing duplicity collection-status file:///whatever/ will report an error like: IOError: [Errno 2] No such file or directory: '/whatever/duplicity-full-signatures.20100306T090815Z.sigtar.gpg' It could detect that there's indeed a duplicity-full-signatures.20100306T090815Z.sigtar.gz instead of the .gpg file. Hope this helps. 
Best regards, ```",28 118019439,2010-03-24 08:40:46.293,"crash on cleanup, local archive files not found remote (lp:#545823)","[Original report](https://bugs.launchpad.net/bugs/545823) created by **Jelle de Jong (jelledejong)** ``` ------------------------------------------------------------------------ http://lists.gnu.org/archive/html/duplicity-talk/2010-03/msg00099.html ------------------------------------------------------------------------ host02:~# duplicity --version duplicity 0.6.08b ------------------------------------------------------------------------ host02:~# duplicity cleanup --extra-clean --force scp://host02-backup@host01.example.com/backup/vmail GnuPG passphrase: Local and Remote metadata are synchronized, no sync needed. Warning, found the following remote orphaned signature files: duplicity-new-signatures.20100203T223035Z.to.20100204T223041Z.sigtar.gpg duplicity-new-signatures.20100204T223041Z.to.20100205T223035Z.sigtar.gpg duplicity-new-signatures.20100205T223035Z.to.20100206T223034Z.sigtar.gpg duplicity-new-signatures.20100206T223034Z.to.20100207T223036Z.sigtar.gpg duplicity-new-signatures.20100207T223036Z.to.20100208T223035Z.sigtar.gpg duplicity-new-signatures.20100208T223035Z.to.20100209T223042Z.sigtar.gpg duplicity-new-signatures.20100209T223042Z.to.20100210T223033Z.sigtar.gpg duplicity-new-signatures.20100210T223033Z.to.20100211T223040Z.sigtar.gpg duplicity-new-signatures.20100211T223040Z.to.20100212T223033Z.sigtar.gpg duplicity-new-signatures.20100212T223033Z.to.20100213T223037Z.sigtar.gpg duplicity-new-signatures.20100213T223037Z.to.20100214T223033Z.sigtar.gpg duplicity-new-signatures.20100214T223033Z.to.20100215T223037Z.sigtar.gpg duplicity-new-signatures.20100215T223037Z.to.20100216T223037Z.sigtar.gpg duplicity-new-signatures.20100216T223037Z.to.20100217T223037Z.sigtar.gpg duplicity-new-signatures.20100217T223037Z.to.20100218T223039Z.sigtar.gpg duplicity-new-signatures.20100218T223039Z.to.20100219T230500Z.sigtar.gpg 
duplicity-new-signatures.20100219T230500Z.to.20100311T171314Z.sigtar.gpg duplicity-new-signatures.20100311T171314Z.to.20100311T172255Z.sigtar.gpg Warning, found the following orphaned backup file: [duplicity-inc.20100311T173524Z.to.20100311T223101Z.manifest.part] Last full backup date: Fri Mar 12 23:30:52 2010 Deleting these files from backend: duplicity-full-signatures.20100202T094033Z.sigtar.gz duplicity-new-signatures.20100202T094033Z.to.20100203T223035Z.sigtar.part duplicity-new-signatures.20100203T223035Z.to.20100204T223041Z.sigtar.gz duplicity-new-signatures.20100204T223041Z.to.20100205T223035Z.sigtar.gz duplicity-new-signatures.20100205T223035Z.to.20100206T223034Z.sigtar.gz duplicity-new-signatures.20100206T223034Z.to.20100207T223036Z.sigtar.gz duplicity-new-signatures.20100207T223036Z.to.20100208T223035Z.sigtar.gz duplicity-new-signatures.20100208T223035Z.to.20100209T223042Z.sigtar.gz duplicity-new-signatures.20100209T223042Z.to.20100210T223033Z.sigtar.gz duplicity-new-signatures.20100210T223033Z.to.20100211T223040Z.sigtar.gz duplicity-new-signatures.20100211T223040Z.to.20100212T223033Z.sigtar.gz duplicity-new-signatures.20100212T223033Z.to.20100213T223037Z.sigtar.gz duplicity-new-signatures.20100213T223037Z.to.20100214T223033Z.sigtar.gz duplicity-new-signatures.20100214T223033Z.to.20100215T223037Z.sigtar.gz duplicity-new-signatures.20100215T223037Z.to.20100216T223037Z.sigtar.gz duplicity-new-signatures.20100216T223037Z.to.20100217T223037Z.sigtar.gz duplicity-new-signatures.20100217T223037Z.to.20100218T223039Z.sigtar.gz duplicity-new-signatures.20100218T223039Z.to.20100219T230500Z.sigtar.gz duplicity-new-signatures.20100219T230500Z.to.20100311T171314Z.sigtar.gz duplicity-new-signatures.20100311T171314Z.to.20100311T172255Z.sigtar.gz duplicity-new-signatures.20100311T172255Z.to.20100311T173524Z.sigtar.part duplicity-new-signatures.20100311T173524Z.to.20100311T223101Z.sigtar.part duplicity-inc.20100311T173524Z.to.20100311T223101Z.manifest.part 
duplicity-full-signatures.20100202T094033Z.sigtar.gpg duplicity-new-signatures.20100203T223035Z.to.20100204T223041Z.sigtar.gpg duplicity-new-signatures.20100204T223041Z.to.20100205T223035Z.sigtar.gpg duplicity-new-signatures.20100205T223035Z.to.20100206T223034Z.sigtar.gpg duplicity-new-signatures.20100206T223034Z.to.20100207T223036Z.sigtar.gpg duplicity-new-signatures.20100207T223036Z.to.20100208T223035Z.sigtar.gpg duplicity-new-signatures.20100208T223035Z.to.20100209T223042Z.sigtar.gpg duplicity-new-signatures.20100209T223042Z.to.20100210T223033Z.sigtar.gpg duplicity-new-signatures.20100210T223033Z.to.20100211T223040Z.sigtar.gpg duplicity-new-signatures.20100211T223040Z.to.20100212T223033Z.sigtar.gpg duplicity-new-signatures.20100212T223033Z.to.20100213T223037Z.sigtar.gpg duplicity-new-signatures.20100213T223037Z.to.20100214T223033Z.sigtar.gpg duplicity-new-signatures.20100214T223033Z.to.20100215T223037Z.sigtar.gpg duplicity-new-signatures.20100215T223037Z.to.20100216T223037Z.sigtar.gpg duplicity-new-signatures.20100216T223037Z.to.20100217T223037Z.sigtar.gpg duplicity-new-signatures.20100217T223037Z.to.20100218T223039Z.sigtar.gpg duplicity-new-signatures.20100218T223039Z.to.20100219T230500Z.sigtar.gpg duplicity-new-signatures.20100219T230500Z.to.20100311T171314Z.sigtar.gpg duplicity-new-signatures.20100311T171314Z.to.20100311T172255Z.sigtar.gpg Remote file or directory does not exist in command='rm ""duplicity- inc.20100311T173524Z.to.20100311T223101Z.manifest.part""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' failed (attempt #1) Remote file or directory does not exist in command='rm ""duplicity- inc.20100311T173524Z.to.20100311T223101Z.manifest.part""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' failed (attempt #2) Remote file or directory does not exist in command='rm ""duplicity- inc.20100311T173524Z.to.20100311T223101Z.manifest.part""' Running 'sftp 
-oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' failed (attempt #3) Remote file or directory does not exist in command='rm ""duplicity-inc.20100311T173524Z.to.20100311T223101Z.manifest.part""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' failed (attempt #4) Remote file or directory does not exist in command='rm ""duplicity-inc.20100311T173524Z.to.20100311T223101Z.manifest.part""' Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' failed (attempt #5) Giving up trying to execute 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' after 5 attempts BackendException: Error running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=1 host02-backup@host01.example.com' ------------------------------------------------------------------------ See file attached to this bug report. ------------------------------------------------------------------------ ```",12 118019260,2010-03-21 08:00:29.258,Should not auto-create missing target dirs (lp:#543226),"[Original report](https://bugs.launchpad.net/bugs/543226) created by **Olivier Berger (olivierberger)** ``` Hi. It seems that duplicity doesn't check for and report a missing destination dir, but instead auto-creates it. That may seem convenient to users, but can lead to dangerous situations. Imagine that the (local) target dir is file:///mnt/sda1/backups for instance, where /mnt/sda1 is a mounted volume. If for whatever reason, the volume hasn't been mounted, then the next duplicity run will start writing inside the mount point directory. It is quite common, I suppose, to use external disks of big capacity to relieve the main system's disks for backups... but then it would start filling up the main system. I think a check for existence of the target dir would be the first thing to do at the beginning of the backup. Thanks in advance. 
```",8 118022558,2010-03-04 22:22:35.712,Setting --tmpdir on Synology fails (lp:#532244),"[Original report](https://bugs.launchpad.net/bugs/532244) created by **hasko (hasko)** ``` Installed duplicity 0.6.05 on a Synology disk station. Using --tmpdir option and alternatively TMPDIR environment var renders the following messages: GnuPG passphrase: Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup. Retype passphrase to confirm: Cleanup of temporary directory /volume1/public/tmp/duplicity-AfRpy0-tempdir failed - this is probably a bug. Traceback (most recent call last): File ""/opt/bin/duplicity-2.6"", line 1241, in with_tempdir(main) File ""/opt/bin/duplicity-2.6"", line 1234, in with_tempdir fn() File ""/opt/bin/duplicity-2.6"", line 1212, in main full_backup(col_stats) File ""/opt/bin/duplicity-2.6"", line 417, in full_backup globals.backend) File ""/opt/bin/duplicity-2.6"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 278, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/opt/lib/python2.6/site-packages/duplicity/gpg.py"", line 270, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/volume1/public/tmp/duplicity-AfRpy0-tempdir/mktemp-ZlwWIw-2' Is this me being stupid? The Synology acting up weird? A bug?? Python 2.6, OS: Synology DSM 2.2-0959, target file system Amazon S3. I'll happily send the longer log if necessary, i.e. if it's really an obscure bug. ```",28 118018646,2010-01-10 03:43:09.944,Allow testing an exclude file (output list of files it matches) (lp:#505366),"[Original report](https://bugs.launchpad.net/bugs/505366) created by **Daniel Hahler (blueyed)** ``` I'd like to have an interface/command to list the files that matches a given include/exclude pattern/file. 
USE CASE: I'm facing a problem, where duplicity (0.5.06, Ubuntu Hardy) backs up far too many files, and it appears to be related to my refactoring of excludes in the exclude file, from:   + /var/lib/vz/private/111/var/lib/mysql/sqldump   + /var/lib/vz/private/142/var/lib/mysql/sqldump   /var/lib/vz/private/111/var/lib/mysql/*/   /var/lib/vz/private/142/var/lib/mysql/*/ to   + **/var/lib/mysql/sqldump   **/var/lib/mysql/*/*   (and adding)   **/var/lib/mysql/ibdata1   **/var/lib/mysql/ib_logfile* It would be very helpful to easily test those patterns to track this down. ```",20 118019437,2010-01-07 19:17:18.329,"url passwords are not escaped, duplicity crashes (lp:#504417)","[Original report](https://bugs.launchpad.net/bugs/504417) created by **edso (ed.so)** ``` this probably affects all data given in the url string! text attached from original message: http://lists.gnu.org/archive/html/duplicity-talk/2009-12/msg00028.html I forward this because he is totally right and this should also be fixed upstream. What duply does from the next release (which is from today's xmas release ;) is to url encode separately given user/password params and hinting the user in the conf file that special chars have to be url encoded in the parts username, password, path of the url. As duplicity can't possibly know if a user already url encoded these, it can't really fix it. But it should still try to guide the user around such errors. a) I guess some note about the necessity url encoding in the error message would help a lot here. b) Additionally the whole url could be scanned for chars that are not A-Za-z0-9._-~:/ . As duplicity backends do not have querystrings I guess any other character in the url means that the url is not valid. ... 
merry days, ede -------- Original Message -------- Bugs item #2920707, was opened at 2009-12-24 14:28 Message generated for change (Tracker Item Submitted) made by nobody You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=1041147&aid=2920707&group_id=217745 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: ftp passwords are not escaped, duplicity crashes Initial Comment: When an ftp password contains special characters (in my case the ? and = letters), they must be encoded as %3F and %3D. Currently, neither duply nor duplicity does the escaping, and neither of them tells the user to do it manually. If they are not encoded properly, duplicity will crash with a rather meaningless error message. 
For ftplictiy V 0.5.18 it's: User error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 825, in with_tempdir(main) File ""/usr/bin/duplicity"", line 818, in with_tempdir fn() File ""/usr/bin/duplicity"", line 747, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.6/dist-packages/duplicity/commandline.py"", line 593, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/lib/python2.6/dist-packages/duplicity/commandline.py"", line 488, in set_backend backend1, backend2 = backend.get_backend(arg1), backend.get_backend(arg2) File ""/usr/lib/python2.6/dist-packages/duplicity/backend.py"", line 84, in get_backend pu = ParsedUrl(url_string) File ""/usr/lib/python2.6/dist-packages/duplicity/backend.py"", line 191, in __init__ raise InvalidBackendURL(""Syntax error (port) in: %s"" % url_string) InvalidBackendURL: Syntax error (port) in: ftp://username123:address@hidden In duplicity 0.5.09 the error message is different, more useless, but about the same type of problem. I see three possible solutions: * duply should encode the password properly * duplicity should encode the password properly * both duply and duplicity should state more clearly that they need pre- encoded passwords, so that the user knows he must do it himself ```",10 118019435,2010-01-07 19:12:20.822,unbalanced parenthesis in password breaks duplicity (lp:#504413),"[Original report](https://bugs.launchpad.net/bugs/504413) created by **edso (ed.so)** ``` text from: http://lists.gnu.org/archive/html/duplicity-talk/2009-10/msg00050.html In connection to a bug in duply i stumbled over this one in duplicity ... I used a password containing an unclosed parenthesis eg. 'test)foo' ... I also spend some time to figure out a fix. It could be the replacement of line 359 in backend.py (version 0.6.05) .. here a modified version of the function. The commented line is the original. 
The replacement above splits the url in protocol, credentials, rest and creates 'prot://[user:address@hidden' string which should suffice for obfuscation . The documentation has to be adapted of course. ...ede --FIX--> def munge_password(self, commandline): [SNIP] if self.parsed_url.password: return re.sub(r'^([^:/]+)://(([^:/@]*):?([^:/@]*))@?(.*)$', r'\1://\3:address@hidden', commandline) #return re.sub(self.parsed_url.password, '', commandline) else: return commandline ---ERROR--> Traceback (most recent call last): File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/duplicity"", line 1242, in with_tempdir(main) File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/duplicity"", line 1235, in with_tempdir fn() File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/duplicity"", line 1136, in main sync_archive() File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/duplicity"", line 915, in sync_archive remlist = globals.backend.list() File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/duplicity/backends/ftpbackend.py"", line 109, in list l = self.popen_persist(commandline).split(' ') File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/../duplicity/backend.py"", line 416, in popen_persist private = self.munge_password(commandline) File ""/srv/www/vhosts/jamoke.net/_apps/duplicity-0.6.05/bin/../duplicity/backend.py"", line 360, in munge_password return re.sub(self.parsed_url.password, '', commandline) File ""/usr/lib/python2.6/re.py"", line 150, in sub return _compile(pattern, 0).sub(repl, string, count) File ""/usr/lib/python2.6/re.py"", line 243, in _compile raise error, v # invalid expression error: unbalanced parenthesis ```",10 118022276,2010-01-03 14:30:08.456,Unknown error while uploading duplicity-full-signatures (lp:#502609),"[Original report](https://bugs.launchpad.net/bugs/502609) created by **Olivier Berger (olivierberger)** ``` Hi, I'am using Déjà dup since few days. 
I found it really convenient BTW :) So, I had been uploading my files for days and it finally reached 100% this morning. It was now uploading a duplicity-full-signatures file and I had to stop at 1.3 GB to continue later on. Now when I restart Déjà Dup I get an 'unknown error'. I suppose it's a bug. I attached the log and config file. Here is my version: deja-dup 11.1-0ubuntu0karmic1 duplicity 0.6.06-0ubuntu0karmic1 Description: Ubuntu 9.10 thanks indeed! ```",26 118019433,2009-12-28 01:24:19.258,IMAP Backend Error in _prepareBody (with python 2.4) (lp:#500902),"[Original report](https://bugs.launchpad.net/bugs/500902) created by **Mickey Knox (mickey-knox)** ``` Hi, on an older system I have the following error. duplicity 0.6.06 python 2.4.4 Args: /usr/bin/duplicity --verbosity 9 --encrypt-key FFFFFFFF --sign-key FFFFFFFF --gpg-options= --exclude-globbing-filelist /root/.duply/some/exclude /var/www imaps://someone%40googlemail.com:somepass@imap.googlemail.com Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1236, in ? 
with_tempdir(main) File ""/usr/bin/duplicity"", line 1229, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1207, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 416, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 315, in write_multivol (tdp, dest_filename))) File ""/usr/lib/python2.4/site-packages/duplicity/asyncscheduler.py"", line 148, in schedule _task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.4/site-packages/duplicity/asyncscheduler.py"", line 175, in __run_sy nchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 314, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest _filename), File ""/usr/bin/duplicity"", line 240, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.4/site- packages/duplicity/backends/imapbackend.py"", line 131, in pu t body=self._prepareBody(f,remote_filename) File ""/usr/lib/python2.4/site- packages/duplicity/backends/imapbackend.py"", line 103, in _p repareBody mp = email.MIMEMultipart.MIMEMultipart() AttributeError: 'module' object has no attribute 'MIMEMultipart' ```",6 118022268,2009-12-28 01:16:52.775,IMAP Backend Error in put() if manifest-file larger than --volsize (lp:#500901),"[Original report](https://bugs.launchpad.net/bugs/500901) created by **Mickey Knox (mickey-knox)** ``` Hi, im still trying to get duplicity to work with imap backend with a googlemail.com account. 
Unfortunatly i receive the following error: duplicity 0.6.06 python 2.5.2 Args: /usr/bin/duplicity --verbosity 9 --encrypt-key 7FFFAFFF --sign-key FFFFFF9 C --gpg-options= --exclude-globbing-filelist /root/.duply/someconf/exclude / imaps:/ /someone%40googlemail.com:somepass@imap.gmail.com/somedir/ Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1236, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1229, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1207, in main full_backup(col_stats) File ""/usr/bin/duplicity"", line 416, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 315, in write_multivol (tdp, dest_filename))) File ""/usr/lib/python2.5/site-packages/duplicity/asyncscheduler.py"", line 148, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.5/site-packages/duplicity/asyncscheduler.py"", line 175, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 314, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest_filename), File ""/usr/bin/duplicity"", line 240, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.5/site- packages/duplicity/backends/imapbackend.py"", line 135, in put self._conn.append(globals.imap_mailbox, None, None, body) File ""/usr/lib/python2.5/imaplib.py"", line 318, in append return self._simple_command(name, mailbox, flags, date_time) File ""/usr/lib/python2.5/imaplib.py"", line 1055, in _simple_command return self._command_complete(name, self._command(name, *args)) File ""/usr/lib/python2.5/imaplib.py"", line 892, in _command_complete raise self.error('%s command error: %s %s' % (name, typ, data)) error: APPEND command error: BAD ['Could not parse command'] ```",12 118018456,2009-12-21 04:03:48.591,Crash when restoring data KeyError (lp:#498933),"[Original report](https://bugs.launchpad.net/bugs/498933) created by **Michael Terry (mterry)** ``` Binary package hint: deja-dup 1) Ubuntu 
9.10 2) Deja Dup 10.2-0ubuntu1.1 3) I expected my backup to be restored 4) Deja Dup crashed instead. I just tried to restore the backup I made, and it crashes repeatedly when trying to restore. I have it setup to backup my home directory, and tried restoring to both the original location as well as a completely different folder. Both lead to the same error message. Traceback (most recent call last): File ""/usr/bin/duplicity"", line 825, in with_tempdir(main) File ""/usr/bin/duplicity"", line 818, in with_tempdir fn() File ""/usr/bin/duplicity"", line 775, in main restore(col_stats) File ""/usr/bin/duplicity"", line 436, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 521, in Write_ROPaths ITR(ropath.index, ropath) File ""/usr/lib/python2.6/dist-packages/duplicity/lazy.py"", line 336, in __call__ last_branch.fast_process, args) File ""/usr/lib/python2.6/dist-packages/duplicity/robust.py"", line 38, in check_common_error return function(*args) File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 574, in fast_process ropath.copy(self.base_path.new_index(index)) File ""/usr/lib/python2.6/dist-packages/duplicity/path.py"", line 412, in copy other.writefileobj(self.open(""rb"")) File ""/usr/lib/python2.6/dist-packages/duplicity/path.py"", line 574, in writefileobj buf = fin.read(_copy_blocksize) File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 199, in read if not self.addtobuffer(): File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 224, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 331, in next self.set_tarfile() File ""/usr/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 320, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 472, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 15 
ProblemType: Bug Architecture: i386 Date: Sun Dec 20 19:59:06 2009 DistroRelease: Ubuntu 9.10 ExecutablePath: /usr/bin/deja-dup InstallationMedia: Ubuntu 9.10 ""Karmic Koala"" - Release i386 (20091028.5) Package: deja-dup 10.2-0ubuntu1.1 ProcEnviron: LANG=en_US.UTF-8 SHELL=/bin/bash ProcVersionSignature: Ubuntu 2.6.31-16.53-generic SourcePackage: deja-dup Uname: Linux 2.6.31-16-generic i686 ``` Original tags: apport-bug i386",84 118018450,2009-12-09 19:26:20.504,Should not exit during restore when timeout occurs (lp:#494677),"[Original report](https://bugs.launchpad.net/bugs/494677) created by **Tokuko (launchpad-net-tokuko)** ``` Hi, humyo.de seems to be a bit buggy sometimes and in some cases seems to time out. A restore should not fail just because one timeout occurred; duplicity should just retry retrieving the file. The file being restored is several gigabytes (a VMWare hard disk), so having to restart the restore results in a couple of gigabytes of additional download. Regards, Tokudan duplicity 0.6.06 Python 2.6.2 DISTRIB_ID=Ubuntu DISTRIB_RELEASE=9.04 DISTRIB_CODENAME=jaunty DISTRIB_DESCRIPTION=""Ubuntu 9.04"" local Filesystem: Filesystem Size Used Avail Use% Mounted on /dev/sda3 219G 8.9G 199G 5% / /dev/sda3 on / type ext3 (rw,relatime) Traceback: Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1236, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1229, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1183, in main restore(col_stats) File ""/usr/local/bin/duplicity"", line 538, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 518, in Write_ROPaths for ropath in rop_iter: File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 491, in integrate_patch_iters final_ropath = patch_seq2ropath(normalize_ps(patch_seq)) File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 471, in 
patch_seq2ropath misc.copyfileobj(current_file, tempfp) File ""/usr/local/lib/python2.6/dist-packages/duplicity/misc.py"", line 166, in copyfileobj buf = infp.read(blocksize) File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 198, in read if not self.addtobuffer(): File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 223, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 330, in next self.set_tarfile() File ""/usr/local/lib/python2.6/dist-packages/duplicity/patchdir.py"", line 319, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/local/bin/duplicity"", line 575, in get_fileobj_iter manifest.volume_info_dict[vol_num]) File ""/usr/local/bin/duplicity"", line 596, in restore_get_enc_fileobj backend.get(filename, tdp) File ""/usr/local/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 234, in get response = self.request(""GET"", url) File ""/usr/local/lib/python2.6/dist- packages/duplicity/backends/webdavbackend.py"", line 108, in request response = self.conn.getresponse() File ""/usr/lib/python2.6/httplib.py"", line 950, in getresponse response.begin() File ""/usr/lib/python2.6/httplib.py"", line 390, in begin version, status, reason = self._read_status() File ""/usr/lib/python2.6/httplib.py"", line 348, in _read_status line = self.fp.readline() File ""/usr/lib/python2.6/socket.py"", line 395, in readline data = recv(1) File ""/usr/lib/python2.6/ssl.py"", line 96, in self.recv = lambda buflen=1024, flags=0: SSLSocket.recv(self, buflen, flags) File ""/usr/lib/python2.6/ssl.py"", line 222, in recv raise x SSLError: The read operation timed out ```",8 118022485,2009-12-03 16:13:14.647,Resuming an interrupted backup appears successful - but can’t restore from it (0.6.06) (lp:#491971),"[Original report](https://bugs.launchpad.net/bugs/491971) created by **Matthew Twomey (mtwomey)** ``` I'm doing 
backups using openssh scp/sftp to a remote host. When a full backup is disrupted in the middle of transferring and then restarted - Duplicity starts on the next volume number *without* finishing the files that were supposed to be in the previous volume. That is to say: *Normal non-disrupted backup* Volume 1 = files 01 - 10 Volume 2 = files 11 - 20 Volume 3 = files 21 - 30 *Backup which was disrupted during network transfer of Volume 2 and then restarted (""Restarting after volume 2"" was the message in the log)* Volume 1 = files 01 - 10 Volume 2 = files 11 - 13 (or wherever it was disrupted) Volume 3 = files 21 - 30 I've noticed that after a disruption, if I go in and delete the last volume file (the one which was disrupted) on the remote location and then restart the backup - it picks up with that volume, producing a usable backup. If I don't delete the file, the restore process will error out when it gets to the volume that was disrupted. Duplicity appears to expect temporary filenames to be used by the OS/transfer software during transfer, which openssh does not use by default. ```",42 118017933,2009-11-24 17:31:01.301,"Restore fails with ""Invalid data - SHA1 hash mismatch"" (lp:#487720)","[Original report](https://bugs.launchpad.net/bugs/487720) created by **Andrew Fister (andrewfister)** ``` When restoring a backup, one might see an error like: Invalid data - SHA1 hash mismatch: Calculated hash: 0b2bc4c2fb98b36f9891f9172f909d70ab5662e9 Manifest hash: 11cd330357618de52e4e5361a6e63b09ee951ae2 This can happen when a volume file was not completely written to the backend before duplicity was interrupted (say, shutting down the machine or whatever). When duplicity resumes the backup next run, it will start with the next volume. The half-complete volume file will sit on the backend and cause this error later when restoring. 
You can manually recover from this by either restoring from your older backup sets or by restoring individual files that don't happen to be in the corrupted volume. == To Reproduce == See attachment https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/487720/+attachment/2159465/+files/test.sh for a test script to reproduce the problem. == Ubuntu SRU Justification == This is a serious data loss problem for users, which won't be noticed until they try to restore. With Ubuntu 11.10 including Deja Dup, some users may think to back up their data first then upgrade, and may accidentally create corrupted backups. ``` Original tags: verification-done-lucid verification-done-natty verification-needed",204 118019258,2009-10-19 00:55:59.311,"error verbosity: include filename in output, make traceback less prominent (lp:#455081)","[Original report](https://bugs.launchpad.net/bugs/455081) created by **az (az-debian)** ``` i've got a backup of a large samba share which every now and then fails, apparently because of files being modified/truncated while duplicity is accessing them: duplicity dies with a permission denied on reading (all fair and square). the bug here is that the exception info is extremely verbose wrt. duplicity's calling stack but completely lacks information useful to the user - like the name/path of the file that causes the problem. the information given with the default verbosity level should at least include the relevant information (like the path here), and ideally not expose too much of the innards to an end user - the traceback isn't useful unless you're a duplicity developer and might be restricted to higher verbosity levels. here's the relevant output: --- Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Fri Oct 9 23:25:48 2009 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1241, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1234, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1216, in main incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 488, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 295, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.5/site-packages/duplicity/gpg.py"", line 282, in GPGWriteFile data = block_iter.next(min(block_size, bytes_to_go)).data File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 510, in next result = self.process(self.input_iter.next(), size) File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 636, in process data, last_block = self.get_data_block(fp, size - 512) File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 663, in get_data_block buf = fp.read(read_size) File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 420, in read buf = self.infile.read(length) File ""/usr/lib/python2.5/site-packages/duplicity/diffdir.py"", line 389, in read buf = self.infile.read(length) IOError: [Errno 13] Permission denied --- ```",6 118019427,2009-10-14 01:28:10.899,wishlist: differential backups (lp:#450885),"[Original report](https://bugs.launchpad.net/bugs/450885) created by **az (az-debian)** ``` this is a forward/copy of debian bug #550698 (which lives here: http://bugs.debian.org/550698) where the requester asks for differential backups to shorten the backup chain. the requester, however, also asks for full control over which earlier backup to use as base (for example a differential against an incremental backup) which i don't believe makes much sense. 
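To make the distinction concrete (an illustrative sketch only; neither the function nor the mode names come from duplicity's code base): "differential" would always diff against the full backup at the head of the chain, while duplicity's incremental mode diffs against the most recent backup, whatever its kind.

```python
def pick_diff_base(chain, mode):
    """Pick which earlier backup to diff against; chain is ordered
    oldest-to-newest, with chain[0] being the full backup."""
    if not chain:
        raise ValueError("empty chain: run a full backup first")
    if mode == "incremental":
        return chain[-1]   # smallest diff, but restore walks the whole chain
    if mode == "differential":
        return chain[0]    # bigger diff, but restore needs only full + latest
    raise ValueError("unknown mode: %s" % mode)
```

This is why a differential option shortens the effective chain: restoring needs only two archives instead of every increment since the last full backup.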
here's a copy of the request: ---cut--- I like how duplicity works, but having overlong backup chains to conserve bandwidth/space puts too much pressure on the reliability of the remote destination. I would like to keep the chains shorter, but can't due to space constraints as the only alternative is a periodic full backup. Is there a reason as of why ""incremental"" could not accept a ""-t"" argument to specify which archive to use as the previous archive and let me manage the storage by myself? If I had a time or filename specifier like this, I could implement a differential backup script on top of duplicity without any special handling on duplicity's part, much in the same way as you implement differential tar archives. As long as ""restore"" walks the backup chain from the most recent volume, it should also work unchanged. I've seen some old comments in the duplicity mailing list, but no proper feature request. I'm also filing this here for other Debian users to see, since the inability to perform differential archives kinds-of defeats the initial space gain you have with duplicicy. ---cut--- ```",42 118018439,2009-10-09 19:04:09.959,Missing volume file is not detected properly (lp:#447480),"[Original report](https://bugs.launchpad.net/bugs/447480) created by **Kenneth Loafman (kenneth-loafman)** ``` When I do a backup ( local folder to local folder, any options ) and delete one of the archive files ( e.g. ""duplicity- full.20091008T074524Z.vol2.difftar.gpg"" ) another call of duplicity will add some files, but not the data that has been deleted before ( e.g. 25 MB are lost ). This leaves an incomplete backup folder which doesn`t seem right to me. After doing a backup with duplicity without it spitting out an error message the backup archive is expected to be sane / complete. 
```",20 118019254,2009-09-17 20:27:21.427,Allow restricting list-current-files (lp:#432104),"[Original report](https://bugs.launchpad.net/bugs/432104) created by **Michael Terry (mterry)** ``` When calling list-current-files, it would be nice if one could pass a regexp for either files one is interested in or files one is not interested in. Maybe reuse the exclude/include arguments? I could grep the output, but I suspect that's wasting time/resources to list everything then exclude. ```",20 118019253,2009-09-17 20:10:33.941,"Wishlist: Report size of backup (collection-status,list-files,...) (lp:#432092)","[Original report](https://bugs.launchpad.net/bugs/432092) created by **Michael Terry (mterry)** ``` When restoring, it would be nice to know the size of the backup. I know this is difficult because everything is gzipped up. But I imagine either there's a decent estimation that could be provided (the reported size doesn't need to be 100% accurate, but hopefully errs on the side of over- reporting), or duplicity could keep track via its metadata as incrementals are added. Once duplicity knows this, it can give an error if the destination is not large enough to contain it. Additionally, it would be nice if it was reported in, say, a collection- status so that wrappers like Deja Dup could meaningfully use it. ```",52 118019247,2009-09-17 07:31:08.669,Add to deja-dup wake on lan functionality (lp:#431219),"[Original report](https://bugs.launchpad.net/bugs/431219) created by **emilio (emiliomaggio)** ``` Another item that I would like to add to the wish-list. I have recently installed a low-power home server that is the target of my backups. In general (As many users I guess) I am not at home for most of the day and to save even more power (and the planet :-) ) I can set up the server to go in S3 sleep mode if no processes are running and I can manually wake-on-lan whenever I want the server up and running. 
In case of duplicity I should not care to know when the backup is happening and it would be great if duplicity could do the wake-up for me. ```",40 118018418,2009-08-18 21:58:20.065,incomplete chown() error msgs (lp:#415619),"[Original report](https://bugs.launchpad.net/bugs/415619) created by **az (az-debian)** ``` this is a forward/copy of debian bug #541788 (http://bugs.debian.org/cgi- bin/bugreport.cgi?bug=541788): --- Package: duplicity Version: 0.5.16-1~bpo50+1 Severity: normal While running a test restore, duplicity reported these errors: Error '[Errno 1] Operation not permitted: 'git/export'' processing export Error '[Errno 1] Operation not permitted: 'git/hover.git/git-daemon-export- ok'' processing hover.git/git-daemon-export-ok Error '[Errno 1] Operation not permitted: 'git/irssi.git/git-daemon-export- ok'' processing irssi.git/git-daemon-export-ok Error '[Errno 1] Operation not permitted: 'git'' processing . The files in question are owned by root on the source machine, the duplicity restore test was run on another machine using a non-root account. I am guessing that the reported failed operation is chown() as the files still belong to the user running the restore. Duplicity should log which operation failed, including all details, e.g. ""setting owner to root:root"" or similar. --- ```",62 118019168,2009-08-06 08:34:52.749,Support http_proxy (lp:#409739),"[Original report](https://bugs.launchpad.net/bugs/409739) created by **Michael Terry (mterry)** ``` Binary package hint: deja-dup I'm using my own S3 server. deja-dup does not support setting the S3 server (i.e., it appears to be hard coded to use aws.amazon.com). Generally this is not a problem if the program would respect the http proxy setting, however, deja-dup does not appear to do this. When I run deja-dup as follows: http_proxy=http://localhost:8080 deja-dup it fails and I do not see any access attempts in my S3 server's logs. Thanks. 
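For reference, Python's standard library already picks up `http_proxy` from the environment the way the reporter expects, so a backend built on `urllib` would inherit the setting for free (the proxy URL below is a placeholder; whether a given duplicity/deja-dup backend actually routes through `urllib` is not established here):

```python
import os
import urllib.request

# urllib.request.getproxies() consults the process environment on each
# call, so exporting http_proxy before the HTTP client runs is enough.
os.environ["http_proxy"] = "http://localhost:8080"
proxies = urllib.request.getproxies()
```

A backend that builds its own connections (as boto-era S3 code did) has to plumb this value through explicitly, which is presumably where the reported behaviour diverges.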
ProblemType: Bug Architecture: i386 DistroRelease: Ubuntu 9.04 ExecutablePath: /usr/bin/deja-dup Package: deja-dup 7.4-0ubuntu2 ProcEnviron: SHELL=/bin/bash PATH=(custom, user) LANG=en_US.UTF-8 SourcePackage: deja-dup Uname: Linux 2.6.28-14-generic i686 ``` Original tags: apport-bug i386",14 118021737,2017-02-16 12:25:30.302,"On full drive, backup directory got deleted (lp:#1665327)","[Original report](https://bugs.launchpad.net/bugs/1665327) created by **Anes Lihovac (anes-lihovac-gmail)** ``` The other day, my hard drive containing my backups was full, and of course the backup triggered via dejadup would fail, which is configured to run daily. So the next day when the backup started again, I cancelled it, only to find out later that the directory containing my backups was empty. All my backups were gone. ```",6 118021734,2017-02-12 21:27:16.340,Don't require email and password in hubic backend except for first time (lp:#1664063),"[Original report](https://bugs.launchpad.net/bugs/1664063) created by **Pablo Castellano (pablocastellano)** ``` When you configure Duplicity using the Hubic backend you are required to create ~/.hubic_credentials with login data (email and password) and api data (client_id, client_secret, redirect_uri). More info here: http://duplicity.nongnu.org/duplicity.1.html#sect16 The first time you run duplicity, it will generate a ~/.hubic_tokens file with two new values named access_token and refresh_token. From then on your email and password are not used anymore but they are still present in the configuration file. This is insecure because a malicious user could read this file and compromise your whole hubic account. Proposed workaround: Once duplicity has obtained the tokens, set email and password to blank or random data ```",6 118021728,2017-02-02 19:59:21.927,Restore fails with sha1 error. Backups won't restore. 
(lp:#1661373),"[Original report](https://bugs.launchpad.net/bugs/1661373) created by **michaelcole (8-launchpad-michaelcole-com)** ``` This is similar to some bugs from 2009 which have fixes deployed in Duplicity 0.6. https://bugs.launchpad.net/deja-dup/+bug/487720 I saw GitLab melt down yesterday and wanted to check my backups would restore. I'm getting a sha1 error on restore. ubuntu GNOME 16.04 duplicity 0.7.06 deja-dup 34.2 Python 2.7.12 Deja-Dup is configured to push backups to AWS. I did a full backup and 4 incremental ones. If I restore the most recent backup, it restores some files, then errors. I pasted the error here, but don't know where the logs are. It's reproducible. I won't fuss with it for a few days. Invalid data - SHA1 hash mismatch for file: duplicity-inc.20170201T141348Z.to.20170202T133556Z.vol7.difftar.gpg Calculated hash: da39a3ee5e6b4b0d3255bfef95601890afd80709 Manifest hash: d9adcb9246f4763e1d6345597b0f173cc4038f70 Hey, this is a big deal for me because backups that can't restore are worse than no backups. ```",6 118021726,2017-01-01 06:36:09.934,Incorrect passphrase due to incorrect subkey use (lp:#1653406),"[Original report](https://bugs.launchpad.net/bugs/1653406) created by **Kapitan (aric81)** ``` Hello folks! 
With the following on my Ubuntu machine: duplicity 0.7.11 gpg (GnuPG) 2.1.11 libgcrypt 1.6.5 I am trying to execute: /usr/bin/duplicity full -v4 --full-if-older-than 30D --exclude-filelist ~/.backup/HomeBackup.duplicity --gpg-options ""--verbose --verify-options no-show-photos"" --encrypt-key=XXXXXXXX --sign-key=YYYYYYYY --exclude- device-files /home pexpect+scp://user@host.com//home/user/backup/home Note the different keys, neither X nor Y is a master key (both subkeys) The problem is that none of my passphrase is working and the command always fails: ===== Begin GnuPG log ===== gpg: using subkey YYYYYYYY instead of primary key MASTER-KEY gpg: no default secret key: bad passphrase gpg: [stdin]: sign+encrypt failed: bad passphrase ===== End GnuPG log ===== I investigated using gpg2 --edit and passwd and basically it seems that the script uses the passphrase associated with the master key. I am open to further investigation and thanks! ``` Original tags: gpg",6 118019019,2016-12-24 01:06:32.292,Undescriptive duplicity/collection-status error when the backup directory contains two volumes with different file names and same volume number in the same backup set (lp:#1652410),"[Original report](https://bugs.launchpad.net/bugs/1652410) created by **Naël (nathanael-naeri)** ``` [System] Ubuntu 16.04 deja-dup 34.2-0ubuntu1.1 duplicity 0.7.06-2ubuntu2 [Symptoms] When the backup location unfortunately contains two backup volumes with different file names and same volume number in the same backup set, for instance: duplicity-full.20161129T015237Z.vol1.difftar duplicity-full.20161129T015237Z.vol1.difftar.gz this confuses duplicity collection-status, which ends up returning an undescriptive Python assertion error, as seen in this Déjà-Dup log file: DUPLICITY: INFO 1 DUPLICITY: . Args: /usr/bin/duplicity collection-status [...] [...] DUPLICITY: DEBUG 1 DUPLICITY: . 12 files exist on backend DUPLICITY: DEBUG 1 DUPLICITY: . 
Extracting backup chains from list of files: [u'duplicity-full.20161129T015237Z.vol1.difftar', u'duplicity-full.20161129T015237Z.manifest', u'duplicity-full.20161129T015237Z.vol1.difftar.gz', u'duplicity-full-signatures.20161129T015237Z.sigtar.gz', u'duplicity-full-signatures.20161129T015237Z.sigtar', [...] DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity-full.20161129T015237Z.vol1.difftar is not part of a known set; creating new set DUPLICITY: DEBUG 1 DUPLICITY: . File duplicity-full.20161129T015237Z.manifest is part of known set DUPLICITY: ERROR 30 AssertionError [...] DUPLICITY: . File ""/usr/lib/python2.7/dist- packages/duplicity/collections. py"", line 105, in add_filename(self.volume_name_dict, filename) DUPLICITY: . AssertionError: ({1: 'duplicity-full.20161129T015237Z.vol1.difftar'}, 'duplicity-full.20161129T015237Z.vol1.difftar.gz') What happens is that duplicity collection-status takes the uncompressed duplicity-full.20161129T015237Z.vol1.difftar for the start of a backup set, then tries to add the compressed duplicity- full.20161129T015237Z.vol1.difftar.gz to this set, and fails because the volume number of this file has already been added to the set. Otherwise there would be two backup volumes with the same volume number in the backup set. Note that a similar issue may also happen for file signatures instead of backup volumes, e.g.: duplicity-full-signatures.20161129T015237Z.sigtar duplicity-full-signatures.20161129T015237Z.sigtar.gz but backup volumes appear to be tripped on first, perhaps because of alphabetic order. Note also that under normal operation, the backup location isn't supposed to contain a mix of compressed and uncompressed files (or encrypted and unencrypted files), but this situation was still reported by the bug reporter in the original bug report. [Test case] See comment 19, written for Déjà-Dup, but easily adaptable to pure duplicity I think. 
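To illustrate the shape of the failure, a hedged sketch of turning the bare assertion into a descriptive error (the names mirror the traceback above, but this function is illustrative, not duplicity's actual `add_filename`):

```python
def add_filename(volume_name_dict, filename, vol_num):
    """Register a volume file for a backup set, refusing duplicate
    volume numbers with an explanatory message instead of an assert."""
    existing = volume_name_dict.get(vol_num)
    if existing is not None and existing != filename:
        raise ValueError(
            "backup set already has volume %d as %r; refusing to also use %r "
            "(is the backup location mixing compressed and uncompressed "
            "files?)" % (vol_num, existing, filename))
    volume_name_dict[vol_num] = filename
```

With a check like this, the two `.vol1.difftar` / `.vol1.difftar.gz` files from the log would produce a message naming both offenders rather than a raw `AssertionError`.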
[Ideas for fixing] Duplicity already has checks to avoid considering files whose names don't look like they could be part of a backup set (see comment 19, point 4). Perhaps this filename filter could be improved on so that duplicity doesn't burp so hard when a backup volume is present in both compressed and uncompressed forms? Perhaps it could have duplicity prefer a particular filename when there are two volumes with the same number in the same backup set? But then which one and on what grounds? Please also see comment 23. [Easier fix] Worst case, if this situation can't be handled automatically and it requires a human to examine the contents of the backup repository to take adequate action, then it would be helpful if duplicity returned a more descriptive message than the current terse assertion error. ``` Original tags: apport-bug testcase xenial",28 118019404,2016-12-23 08:47:26.187,rdiff: Compute diffs in parallel (lp:#1652249),"[Original report](https://bugs.launchpad.net/bugs/1652249) created by **az (az-debian)** ``` this is a forward of debian bug 848950, which lives over there: http://bugs.debian.org/848950 the original reporter requests an improvement to rdiffdir, namely to parallelise diff computation. ```",6 118021710,2016-12-20 12:37:51.712,upload to S3 with multipart processing fails (lp:#1651430),"[Original report](https://bugs.launchpad.net/bugs/1651430) created by **Nick (n6ck)** ``` Hi, we are using Amazon S3 to upload our backups with duplicity. Recently we got more files to backup and the backup suddenly failed. I investigated and found that it happened due to this bug https://bugs.launchpad.net/duplicity/+bug/385495. To work around this, I changed our backup command to use multipart processing. 
My command line now looks like this: """""" env PASSPHRASE='' duplicity --log-file /var/log/duplicity/duplicity.log --verbosity info --archive-dir=/var/tmp/duplicity --include-filelist --encrypt-key '0xC9BBFE6A' --encrypt-key '0x3247FB5E' --encrypt-key '0xFA992415' --no-print-statistics --gpg-options '--trust- model=always' --full-if-older-than 3D --allow-source-mismatch --s3-use-new- style --s3-use-multiprocessing --s3-multipart-chunk-size 104857600 s3://s3-eu-central-1.amazonaws.com// """""" Sadly the upload of the first volume fails now with this output: """""" Writing duplicity-full.20161220T121506Z.vol1.difftar.gpg Uploading s3://s3-eu-central-1.amazonaws.com///duplicity- full.20161220T121506Z.vol1.difftar.gpg to STANDARD Storage Traceback (most recent call last): File ""/usr/lib/python2.7/dist- packages/duplicity/backends/_boto_multi.py"", line 205, in _upload num_cb=max(2, 8 * bytes / (1024 * 1024)) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/multipart.py"", line 260, in upload_part_from_file query_args=query_args, size=size) File ""/usr/local/lib/python2.7/dist-packages/boto/s3/key.py"", line 1225, in set_contents_from_file fp.seek(0, os.SEEK_END) File ""/usr/lib/python2.7/dist-packages/duplicity/filechunkio.py"", line 47, in seek self.seek(self.bytes + offset) File ""/usr/lib/python2.7/dist-packages/duplicity/filechunkio.py"", line 43, in seek super(FileChunkIO, self).seek(self.offset + offset) IOError: [Errno 22] Invalid argument """""" I was using duplicity version '0.7.10-0ubuntu0ppa1240~ubuntu14.04.1', but also tried the most current daily built version '0.7.10-0ubuntu0ppa1240~ubuntu14.04.1'. We are using Ubuntu 14.04 with python 2.7.6. Can anyone help with this bug? ```",14 118021709,2016-12-17 01:26:01.892,[needs-packaging] MEGA Sync - web hosting (lp:#1650698),"[Original report](https://bugs.launchpad.net/bugs/1650698) created by **Szymon Scholz (quomoow)** ``` MEGA provides free 50GB per account space in his filehost. 
URL: https://github.com/meganz/MEGAsync License: Custom, Open Source ``` Original tags: needs-packaging",6 118019395,2016-11-30 17:44:37.540,Adding par2 to existing storage fails (lp:#1646193),"[Original report](https://bugs.launchpad.net/bugs/1646193) created by **moredread (moredread)** ``` I tried to add par2 support to an existing backup storage (i.e. there are already backups there), but it fails. Using a new storage folder works fine. Not sure if my use case is supported, but it'd be nice to be able to add par2 for newer backups. Duplicity version: 0.7.10 Duply version: 2.0.1 Python version: Python 2.7.9 OS Distro and version: Debian stable Type of target filesystem: Linux Log output from -v9 option: Start duply v2.0.1, time is 2016-11-30 18:33:51. Using profile '/etc/duply/server-new-dh'. Using installed duplicity version 0.7.10, python 2.7.9, gpg 1.4.18 (Home: ~/.gnupg), awk 'GNU Awk 4.1.1, API: 1.1 (GNU MPFR 3.1.2-p3, GNU MP 6.0.0)', grep 'grep (GNU grep) 2.20', bash '4.3.30(1)-release (x86_64-pc-linux- gnu)'. Autoset found secret key of first GPG_KEY entry 'XXXXXXXX' for signing. Checking TEMP_DIR '/tmp' is a folder and writable (OK) Test - Encrypt to 'XXXXXXXX' & Sign with 'XXXXXXXX' (OK) Test - Decrypt (OK) Test - Compare (OK) Cleanup - Delete '/tmp/duply.2276.1480527231_*'(OK) --- Start running command PRE at 18:33:52.119 --- Skipping n/a script '/etc/duply/server-new-dh/pre'. 
--- Finished state OK at 18:33:52.158 - Runtime 00:00:00.039 --- --- Start running command BKP at 18:33:52.185 --- Using archive dir: /root/.cache/duplicity/duply_server-new-dh Using backup name: duply_server-new-dh Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Failed: the scheme ftp already has a backend associated with it Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.sshbackend Failed: 'module' object has no attribute 'ssh_backend' Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.u1backend Succeeded Import of duplicity.backends.webdavbackend 
Succeeded Setting multipart boto backend process pool to 4 processes Reading globbing filelist /etc/duply/server-new-dh/exclude Main action: inc ================================================================================ duplicity 0.7.10 (August 20, 2016) Args: /usr/local/bin/duplicity --name duply_server-new-dh --encrypt-key XXXXXXXX --sign-key XXXXXXXX --verbosity 9 --gpg-options --compress- algo=bzip2 --bzip2-compress-level=9 --personal-cipher-preferences AES256 --full-if-older-than 1M --volsize 100 --asynchronous-upload --s3-use- multiprocessing --par2-redundancy 15 --exclude-filelist /etc/duply/server- new-dh/exclude / par2+s3://objects-us-west-1.dream.io/foobar Linux foobar 3.18.36 #1 SMP Thu Jun 30 15:35:47 CEST 2016 x86_64 /usr/bin/python2 2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2] ================================================================================ Using temporary directory /tmp/duplicity-4qKW9b-tempdir Registering (mkstemp) temporary file /tmp/duplicity-4qKW9b-tempdir/mkstemp- Bh5MFV-1 Temp has 34918731776 available, backup will use approx 241172480. 
Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full- signatures.20151213T051406Z.sigtar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full- signatures.20160617T045203Z.sigtar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full- signatures.20160718T041406Z.sigtar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full- signatures.20160818T041405Z.sigtar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full- signatures.20161124T155241Z.sigtar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.manifest.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol1.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol10.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol11.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol12.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol13.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol14.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol15.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol16.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol17.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol18.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol19.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol2.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol20.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol21.difftar.gpg Listed 
s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol22.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol23.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol24.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol25.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol26.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol27.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol28.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol29.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol3.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol30.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol31.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol32.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol33.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol34.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol35.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol36.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol37.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol38.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol39.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- full.20151213T051406Z.vol4.difftar.gpg Listed s3://objects-us-west-1.dream.io/foobar/duplicity- 
full.20151213T051406Z.vol40.difftar.gpg
[... identical "Listed" lines for duplicity-full.20151213T051406Z volumes 41-75 and 5-9 elided ...]
Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full.20160617T045203Z.manifest.gpg
[... identical "Listed" lines for duplicity-full.20160617T045203Z volumes 1-59 elided ...]
Listed s3://objects-us-west-1.dream.io/foobar/duplicity-full.20160617T045203Z.vol6.difftar.gpg ... 
Incremental Fri Jun 24 06:14:04 2016 1
Incremental Sat Jun 25 06:14:06 2016 1
Incremental Sun Jun 26 06:14:04 2016 1
Incremental Mon Jun 27 06:14:05 2016 1
Incremental Tue Jun 28 06:14:05 2016 1
Incremental Wed Jun 29 06:14:05 2016 1
Incremental Thu Jun 30 06:14:05 2016 1
Incremental Fri Jul 1 06:14:05 2016 1
Incremental Sat Jul 2 06:14:05 2016 1
Incremental Sun Jul 3 06:14:05 2016 1
Incremental Mon Jul 4 06:14:05 2016 1
Incremental Tue Jul 5 06:14:05 2016 1
Incremental Wed Jul 6 06:14:07 2016 1
Incremental Thu Jul 7 06:14:05 2016 1
Incremental Fri Jul 8 06:14:06 2016 1
Incremental Sat Jul 9 06:14:06 2016 2
Incremental Sun Jul 10 06:14:05 2016 1
Incremental Mon Jul 11 06:14:05 2016 1
Incremental Tue Jul 12 06:14:05 2016 1
Incremental Wed Jul 13 06:14:07 2016 1
Incremental Thu Jul 14 06:14:05 2016 1
Incremental Fri Jul 15 06:14:05 2016 4
Incremental Sat Jul 16 06:14:05 2016 1
Incremental Sun Jul 17 06:14:05 2016 1
-------------------------
Secondary chain 3 of 4:
-------------------------
Chain start time: Mon Jul 18 06:14:06 2016
Chain end time: Wed Aug 17 06:14:05 2016
Number of contained backup sets: 31
Total number of contained volumes: 113
Type of backup set: Time: Num volumes:
Full Mon Jul 18 06:14:06 2016 79
Incremental Tue Jul 19 06:14:05 2016 2
Incremental Wed Jul 20 06:14:05 2016 1
Incremental Thu Jul 21 06:14:04 2016 1
Incremental Fri Jul 22 06:14:05 2016 1
Incremental Sat Jul 23 06:14:05 2016 2
Incremental Sun Jul 24 06:14:05 2016 1
Incremental Mon Jul 25 06:14:05 2016 2
Incremental Tue Jul 26 06:14:05 2016 1
Incremental Wed Jul 27 06:14:05 2016 1
Incremental Thu Jul 28 06:14:08 2016 1
Incremental Fri Jul 29 06:14:05 2016 1
Incremental Sat Jul 30 06:14:06 2016 1
Incremental Sun Jul 31 06:14:07 2016 1
Incremental Mon Aug 1 06:14:05 2016 1
Incremental Tue Aug 2 06:14:05 2016 1
Incremental Wed Aug 3 06:14:05 2016 1
Incremental Thu Aug 4 06:14:05 2016 1
Incremental Fri Aug 5 06:14:04 2016 1
Incremental Sat Aug 6 06:14:05 2016 1
Incremental Sun Aug 7 06:14:07 2016 1
Incremental Mon Aug 8 06:14:04 2016 1
Incremental Tue Aug 9 06:14:05 2016 1
Incremental Wed Aug 10 06:14:04 2016 1
Incremental Thu Aug 11 06:14:07 2016 2
Incremental Fri Aug 12 06:14:05 2016 1
Incremental Sat Aug 13 06:14:06 2016 1
Incremental Sun Aug 14 06:14:04 2016 1
Incremental Mon Aug 15 06:14:05 2016 1
Incremental Tue Aug 16 06:14:06 2016 1
Incremental Wed Aug 17 06:14:05 2016 1
-------------------------
Secondary chain 4 of 4:
-------------------------
Chain start time: Thu Aug 18 06:14:05 2016
Chain end time: Wed Aug 31 06:14:05 2016
Number of contained backup sets: 14
Total number of contained volumes: 94
Type of backup set: Time: Num volumes:
Full Thu Aug 18 06:14:05 2016 81
Incremental Fri Aug 19 06:14:05 2016 1
Incremental Sat Aug 20 06:14:10 2016 1
Incremental Sun Aug 21 06:14:13 2016 1
Incremental Mon Aug 22 06:14:32 2016 1
Incremental Tue Aug 23 06:14:05 2016 1
Incremental Wed Aug 24 06:14:05 2016 1
Incremental Thu Aug 25 06:14:05 2016 1
Incremental Fri Aug 26 06:14:06 2016 1
Incremental Sat Aug 27 06:14:09 2016 1
Incremental Sun Aug 28 06:14:05 2016 1
Incremental Mon Aug 29 06:14:05 2016 1
Incremental Tue Aug 30 06:14:04 2016 1
Incremental Wed Aug 31 06:14:05 2016 1
-------------------------
Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Thu Nov 24 16:52:41 2016
Chain end time: Wed Nov 30 17:38:31 2016
Number of contained backup sets: 2
Total number of contained volumes: 87
Type of backup set: Time: Num volumes:
Full Thu Nov 24 16:52:41 2016 85
Incremental Wed Nov 30 17:38:31 2016 2
-------------------------
No orphaned or incomplete backup sets found. 
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
Registering (mktemp) temporary file /tmp/duplicity-4qKW9b-tempdir/mktemp-EkZ084-2
Making directory /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Deleting tree /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Selecting /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Deleting /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Backtrace of previous error:
Traceback (innermost last):
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 369, in inner_retry
    return fn(self, *args)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 545, in get
    self.backend._get(remote_filename, local_path)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/par2backend.py"", line 118, in get
    self.wrapped_backend._get(par2file.get_filename(), par2file)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 247, in _get
    self.pre_process_download(remote_filename, wait=True)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 298, in pre_process_download
    self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
IndexError: list index out of range
Attempt 1 failed. 
IndexError: list index out of range
[the same directory operations and traceback repeat verbatim for attempts 2, 3 and 4]
Attempt 4 failed. 
IndexError: list index out of range
Making directory /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Deleting tree /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Selecting /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Deleting /tmp/duplicity-4qKW9b-tempdir/duplicity_temp.1
Backtrace of previous error:
Traceback (innermost last):
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 369, in inner_retry
    return fn(self, *args)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 545, in get
    self.backend._get(remote_filename, local_path)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/par2backend.py"", line 118, in get
    self.wrapped_backend._get(par2file.get_filename(), par2file)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 247, in _get
    self.pre_process_download(remote_filename, wait=True)
  File ""/usr/local/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py"", line 298, in pre_process_download
    self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
IndexError: list index out of range
Releasing lockfile /root/.cache/duplicity/duply_server-new-dh/lockfile.lock
Removing still remembered temporary file /tmp/duplicity-4qKW9b-tempdir/mktemp-EkZ084-2
Removing still remembered temporary file /tmp/duplicity-4qKW9b-tempdir/mkstemp-Bh5MFV-1
--- Finished state FAILED 'code 50' at 18:36:01.605 - Runtime 00:02:09.419 ---
--- Start running command POST at 18:36:01.650 ---
Skipping n/a script '/etc/duply/server-new-dh/post'.
--- Finished state OK at 18:36:01.689 - Runtime 00:00:00.038 --- ```",6
118021708,2016-11-25 15:40:50.617,Documentation: specify restore options (lp:#1644870),"[Original report](https://bugs.launchpad.net/bugs/1644870) created by **LKRaider (paul-eipper)** ``` In the man page and docs, it is not clear which options apply to restoration versus backup. The docs should specify which options can be used with each command. 
Each command should have separate --help outputs. ```",6
118021705,2016-11-25 15:37:10.572,Feature request: restore resume (lp:#1644869),"[Original report](https://bugs.launchpad.net/bugs/1644869) created by **LKRaider (paul-eipper)** ``` Currently, if a duplicity restore stops for some reason, the only option is to restart from the beginning, which can take a long time (my connection usually drops in the middle of the night, meaning restores get interrupted every ~24h). Add a feature such that calling the same duplicity restore command again continues restoring the files still missing on the destination.

ENV:
$ duplicity --version
duplicity 0.7.10
$ python2.7 --version
Python 2.7.12
$ uname -a
Darwin 13.4.0 Darwin Kernel Version 13.4.0: Mon Jan 11 18:17:34 PST 2016; root:xnu-2422.115.15~1/RELEASE_X86_64 x86_64 ```",8
118021701,2016-11-12 18:47:22.554,Crashes on initial backup with python NotImplementedError (lp:#1641338),"[Original report](https://bugs.launchpad.net/bugs/1641338) created by **Will Pimblett (wjdp)** ``` Command `duplicity --exclude-filelist=~/.backup_exclude -v8 $HOME amazondrive:///backup/frank-family` returns the following ``` Using archive dir: /home/will/.cache/duplicity/bc43d4b7e72909b8f8e51e0a92de7c81
Using backup name: bc43d4b7e72909b8f8e51e0a92de7c81
Import of duplicity.backends.localbackend Succeeded
Import of duplicity.backends.cfbackend Succeeded
Import of duplicity.backends.dpbxbackend Succeeded
Import of duplicity.backends.gdocsbackend Succeeded
Import of duplicity.backends.tahoebackend Succeeded
Import of duplicity.backends.u1backend Succeeded
Import of duplicity.backends.rsyncbackend Succeeded
Import of duplicity.backends.swiftbackend Succeeded
Import of duplicity.backends.hsibackend Succeeded
Import of duplicity.backends.megabackend Succeeded
Import of duplicity.backends.botobackend Succeeded
Import of duplicity.backends.ftpsbackend Succeeded
Import of duplicity.backends.imapbackend Succeeded
Import of duplicity.backends.webdavbackend Succeeded 
Import of duplicity.backends.sshbackend Succeeded
Import of duplicity.backends.amazondrivebackend Succeeded
Import of duplicity.backends.ftpbackend Succeeded
Could not load OAuth2 token. Trying to create a new one. (original error: [Errno 2] No such file or directory: '/home/will/.duplicity_amazondrive_oauthtoken.json')
In order to allow duplicity to access AmazonDrive, please open the following URL in a browser and copy the URL of the page you see after authorization here: ~~~
Reading filelist /home/will/.backup_exclude
Sorting filelist /home/will/.backup_exclude
Main action: inc
================================================================================
duplicity 0.6.23 (January 24, 2014)
Args: /usr/bin/duplicity --exclude-filelist=~/.backup_exclude -v8 /home/family amazondrive:///backup/frank-family
Linux frank 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:20:08 UTC 2016 i686 i686
/usr/bin/python 2.7.6 (default, Jun 22 2015, 18:00:18) [GCC 4.8.2]
================================================================================
Using temporary directory /tmp/duplicity-4WhJ6m-tempdir
Temp has 998864809984 available, backup will use approx 34078720.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Collection Status
-----------------
Connecting with backend: AmazonDriveBackend
Archive directory: /home/will/.cache/duplicity/bc43d4b7e72909b8f8e51e0a92de7c81
Found 0 secondary backup chains.
No backup chains with active signatures found
No orphaned or incomplete backup sets found.
No signatures found, switching to full backup.
Using temporary directory /home/will/.cache/duplicity/bc43d4b7e72909b8f8e51e0a92de7c81/duplicity-AUO4M3-tempdir
Using temporary directory /home/will/.cache/duplicity/bc43d4b7e72909b8f8e51e0a92de7c81/duplicity-060O9H-tempdir
AsyncScheduler: instantiating at concurrency 0
A .
Error accessing possibly locked file /home/family/.bash_history
Error accessing possibly locked file /home/family/.btsync
Error accessing possibly locked file /home/family/.cache
Error accessing possibly locked file /home/family/.config
A .excluded_from_backup
A .hplip
A .hplip/hplip.conf
Error accessing possibly locked file /home/family/.lesshst
A .share
A .share/hf
A .share/hf/Documents
A .share/hf/Music
A .share/hf/Photos
A .share/media
A .share/media/.DS_Store
A .share/media/Films
A .share/media/Music
A .share/media/Photos
A .share/media/Recordings
A .share/media/TV Shows
A .share/util
A .share/util/.DS_Store
A .share/util/AbsoluteUninstaller.exe
A .share/util/ChromeStandaloneSetup.exe
AsyncScheduler: running task synchronously (asynchronicity disabled)
Traceback (most recent call last):
  File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main)
  File ""/usr/bin/duplicity"", line 1488, in with_tempdir
    fn()
  File ""/usr/bin/duplicity"", line 1337, in main
    do_backup(action)
  File ""/usr/bin/duplicity"", line 1463, in do_backup
    full_backup(col_stats)
  File ""/usr/bin/duplicity"", line 542, in full_backup
    globals.backend)
  File ""/usr/bin/duplicity"", line 424, in write_multivol
    (tdp, dest_filename, vol_num)))
  File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task
    return self.__run_synchronously(fn, params)
  File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously
    ret = fn(*params)
  File ""/usr/bin/duplicity"", line 423, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num),
  File ""/usr/bin/duplicity"", line 314, in put
    backend.put(tdp, dest_filename)
  File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 397, in put
    raise NotImplementedError()
NotImplementedError ``` ```",6
118021693,2016-11-08 14:39:42.382,acd_cli backend: Name collision with non-cached file (lp:#1640195),"[Original 
report](https://bugs.launchpad.net/bugs/1640195) created by **Juan Orti Alcaine (juan.orti)** ``` Hi, I get frequent upload errors with the acd_cli backend. It seems that after a connection error such as ""Read timed out"", the subsequent retries fail with ""Name collision with non-cached file. If you want to overwrite, please sync and try again."" I guess the backend should run an ""acd_cli sync"" before any subsequent retry to avoid this?

I'm using Fedora 25 x86_64, with these packages:
duplicity-0.7.10-1.fc25.x86_64
python-2.7.12-7.fc25.x86_64

---------------------------
This is a snip of the error:

A mnt/btrfs/BorgBackup@DuplicityBackup/nologin-laptop/repo/data/0/2031
AsyncScheduler: running task synchronously (asynchronicity disabled)
Writing duplicity-inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg
Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity-inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/''
Backtrace of previous error:
Traceback (innermost last):
  File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry
    return fn(self, *args)
  File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put
    self.__do_put(source_path, remote_filename)
  File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put
    self.backend._put(source_path, remote_filename)
  File ""/usr/lib64/python2.7/site-packages/duplicity/backends/acdclibackend.py"", line 94, in _put
    l = self.subprocess_popen(commandline)
  File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen
    (private, result, stdout + '\n' + stderr))
BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity-inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 8, with output: [ ] 0.0% of 200MiB 0/1 -70.1MB/s 0s 1 file(s) 
failed. 16-11-08 14:32:50.630 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. RequestError: 1000, HTTPSConnectionPool(host='content- eu.drive.amazonaws.com', port=443): Read timed out. (read timeout=60). Attempt 1 failed. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 8, with ou$ put: [ ] 0.0% of 200MiB 0/1 -70.1MB/s 0s 1 file(s) failed. 16-11-08 14:32:50.630 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. RequestError: 1000, HTTPSConnectionPool(host='content- eu.drive.amazonaws.com', port=443): Read timed out. (read timeout=60). Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' 
'/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 15.4MB/s 0s 1 file(s) failed. 16-11-08 14:33:36.932 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:33:36.988 [WARNING] [acd_cli] - Return value error code: 256. Attempt 2 failed. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with ou$ put: [#########################] 100.0% of 201MiB 1/1 15.4MB/s 0s 1 file(s) failed. --------------------------- I launch duplicity with duply, this is the beginning of the run: Start duply v1.11.3, time is 2016-11-08 11:25:04. Using profile '/etc/duply/acd'. Using installed duplicity version 0.7.10, python 2.7.12, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.1)', grep 'grep (GNU grep) 2.26', bash '4.3.43(1)-release (x86_64-redhat-linux- gnu)'. Autoset found secret key of first GPG_KEY entry '' for signing. Checking TEMP_DIR '/tmp' is a folder and writable (OK) Test - Encrypt to & Sign with (OK) Test - Decrypt (OK) Test - Compare (OK) Cleanup - Delete '/tmp/duply.1471.1478600704_*'(OK) --- Start running command PRE at 11:25:05.164 --- Running '/etc/duply/acd/pre' - OK Output: Create a readonly snapshot of '/mnt/btrfs/libvirt-images' in '/mnt/btrfs/libvirt-images@DuplicityBackup' Create a readonly snapshot of '/mnt/btrfs/BorgBackup' in '/mnt/btrfs/BorgBackup@DuplicityBackup' Getting changes.. Inserting nodes. 
--- Finished state OK at 11:25:13.306 - Runtime 00:00:08.141 --- --- Start running command BKP at 11:25:13.314 --- Using archive dir: /root/backups/duply-cache/duply_acd Using backup name: duply_acd Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Reading results of 'acd_cli sync' Reading results of 'acd_cli ls '/Backups/xenon/duplicity/'' Reading globbing filelist /etc/duply/acd/exclude Main action: inc ================================================================================ duplicity 0.7.10 (August 20, 2016) Args: /usr/bin/duplicity --archive-dir 
/root/backups/duply-cache --name duply_acd --encrypt-key --encrypt-key --sign-key --verbosity 9 --full-if-older-than 6M --exclude-filelist /etc/duply/acd/exclude / acd+acdcli:///Backups/xenon/duplicity/ Linux xenon 4.8.6-300.fc25.x86_64 #1 SMP Tue Nov 1 12:36:38 UTC 2016 x86_64 x86_64 /usr/bin/python2 2.7.12 (default, Sep 29 2016, 12:52:02) [GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] ================================================================================ Using temporary directory /tmp/duplicity-9DdCo1-tempdir Registering (mkstemp) temporary file /tmp/duplicity-9DdCo1-tempdir/mkstemp- aXYQsN-1 Temp has 8326713344 available, backup will use approx 272629760. Reading results of 'acd_cli ls '/Backups/xenon/duplicity/'' Local and Remote metadata are synchronized, no sync needed. Reading results of 'acd_cli ls '/Backups/xenon/duplicity/'' 6416 files exist on backend 10 files exist in cache Extracting backup chains from list of files: [] File duplicity-new- signatures.20161101T221724Z.to.20161108T061533Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new- signatures.20161101T221724Z.to.20161108T061533Z.sigtar.part' File duplicity-inc.20161101T221724Z.to.20161108T061533Z.manifest.part is not part of a known set; creating new set Processing local manifest /root/backups/duply-cache/duply_acd/duplicity- inc.20161101T221724Z.to.20161108T061533Z.manifest.part (67267) Found manifest volume 1 Found manifest volume 2 Found manifest volume 3 [...] 
--------------------------- And the las 200 lines: juan@juan-laptop:~/Descargas$ tail -n 200 duplicity.log Selection: examining path /mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2030 Selection: result: None from function: Command-line exclude glob: /root/.cache/** Selection: result: None from function: Command-line exclude glob: /root/.ccache/** Selection: result: None from function: Command-line exclude glob: /home/*/Descargas/** Selection: result: None from function: Command-line exclude glob: /home/*/.local/share/Trash/** Selection: result: None from function: Command-line exclude glob: /home/*/.cache/** Selection: result: None from function: Command-line exclude glob: /home/*/.ccache/** Selection: result: None from function: Command-line exclude glob: /mnt/backups/duply-cache/** Selection: result: None from function: Command-line include glob: /etc/** Selection: result: None from function: Command-line include glob: /root/** Selection: result: None from function: Command-line include glob: /var/spool/cron/** Selection: result: None from function: Command-line include glob: /home/** Selection: result: None from function: Command-line include glob: /opt/** Selection: result: None from function: Command-line include glob: /usr/local/** Selection: result: None from function: Command-line include glob: /mnt/backups/** Selection: result: None from function: Command-line include glob: /mnt/btrfs/libvirt-images@DuplicityBackup/** Selection: result: 1 from function: Command-line include glob: /mnt/btrfs/BorgBackup@DuplicityBackup/** Selection: + including file Selecting /mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2030 Comparing mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2030 and None Getting delta of (mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2030 reg) and None A mnt/btrfs/BorgBackup@DuplicityBackup/nologin-laptop/repo/data/0/2030 Selection: examining path 
/mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2031 Selection: result: None from function: Command-line exclude glob: /root/.cache/** Selection: result: None from function: Command-line exclude glob: /root/.ccache/** Selection: result: None from function: Command-line exclude glob: /home/*/Descargas/** Selection: result: None from function: Command-line exclude glob: /home/*/.local/share/Trash/** Selection: result: None from function: Command-line exclude glob: /home/*/.cache/** Selection: result: None from function: Command-line exclude glob: /home/*/.ccache/** Selection: result: None from function: Command-line exclude glob: /mnt/backups/duply-cache/** Selection: result: None from function: Command-line include glob: /etc/** Selection: result: None from function: Command-line include glob: /root/** Selection: result: None from function: Command-line include glob: /var/spool/cron/** Selection: result: None from function: Command-line include glob: /home/** Selection: result: None from function: Command-line include glob: /opt/** Selection: result: None from function: Command-line include glob: /usr/local/** Selection: result: None from function: Command-line include glob: /mnt/backups/** Selection: result: None from function: Command-line include glob: /mnt/btrfs/libvirt-images@DuplicityBackup/** Selection: result: 1 from function: Command-line include glob: /mnt/btrfs/BorgBackup@DuplicityBackup/** Selection: + including file Selecting /mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2031 Comparing mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2031 and None Getting delta of (mnt/btrfs/BorgBackup@DuplicityBackup/nologin- laptop/repo/data/0/2031 reg) and None A mnt/btrfs/BorgBackup@DuplicityBackup/nologin-laptop/repo/data/0/2031 AsyncScheduler: running task synchronously (asynchronicity disabled) Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload 
--force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 8, with output: [ ] 0.0% of 200MiB 0/1 -70.1MB/s 0s 1 file(s) failed. 16-11-08 14:32:50.630 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. RequestError: 1000, HTTPSConnectionPool(host='content- eu.drive.amazonaws.com', port=443): Read timed out. (read timeout=60). Attempt 1 failed. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 8, with output: [ ] 0.0% of 200MiB 0/1 -70.1MB/s 0s 1 file(s) failed. 16-11-08 14:32:50.630 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. RequestError: 1000, HTTPSConnectionPool(host='content- eu.drive.amazonaws.com', port=443): Read timed out. (read timeout=60). 
Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 15.4MB/s 0s 1 file(s) failed. 16-11-08 14:33:36.932 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:33:36.988 [WARNING] [acd_cli] - Return value error code: 256. Attempt 2 failed. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 15.4MB/s 0s 1 file(s) failed. 
16-11-08 14:33:36.932 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:33:36.988 [WARNING] [acd_cli] - Return value error code: 256. Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 11.4MB/s 0s 1 file(s) failed. 16-11-08 14:34:24.131 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:34:24.281 [WARNING] [acd_cli] - Return value error code: 256. Attempt 3 failed. 
BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 11.4MB/s 0s 1 file(s) failed. 16-11-08 14:34:24.131 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:34:24.281 [WARNING] [acd_cli] - Return value error code: 256. Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 11.9MB/s 0s 1 file(s) failed. 
16-11-08 14:35:17.100 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:35:17.363 [WARNING] [acd_cli] - Return value error code: 256. Attempt 4 failed. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 11.9MB/s 0s 1 file(s) failed. 16-11-08 14:35:17.100 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:35:17.363 [WARNING] [acd_cli] - Return value error code: 256. Writing duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg Reading results of 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'' Backtrace of previous error: Traceback (innermost last): File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 522, in put self.__do_put(source_path, remote_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 508, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/acdclibackend.py"", line 94, in _put l = self.subprocess_popen(commandline) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 492, in subprocess_popen (private, result, stdout + '\n' + stderr)) BackendException: Error running 'acd_cli upload --force --overwrite 
'/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 9.8MB/s 0s 1 file(s) failed. 16-11-08 14:36:07.875 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:36:07.967 [WARNING] [acd_cli] - Return value error code: 256. Giving up after 5 attempts. BackendException: Error running 'acd_cli upload --force --overwrite '/tmp/duplicity-9DdCo1-tempdir/duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg' '/Backups/xenon/duplicity/'': returned 1, with output: [#########################] 100.0% of 201MiB 1/1 9.8MB/s 0s 1 file(s) failed. 16-11-08 14:36:07.875 [ERROR] [acd_cli] - Uploading ""duplicity- inc.20161101T221724Z.to.20161108T061533Z.vol466.difftar.gpg"" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again. 16-11-08 14:36:07.967 [WARNING] [acd_cli] - Return value error code: 256. Releasing lockfile /root/backups/duply-cache/duply_acd/lockfile.lock Removing still remembered temporary file /tmp/duplicity-9DdCo1-tempdir/mkstemp-aXYQsN-1 Removing still remembered temporary file /tmp/duplicity-9DdCo1-tempdir/mktemp-OIkbZn-253 14:36:08.210 Task 'BKP' failed with exit code '50'. 
--- Finished state FAILED 'code 50' at 14:36:08.210 - Runtime 03:10:54.896 --- --- Start running command POST at 14:36:08.233 --- Running '/etc/duply/acd/post' - OK Output: Delete subvolume (commit): '/mnt/btrfs/libvirt- images@DuplicityBackup' Delete subvolume (commit): '/mnt/btrfs/BorgBackup@DuplicityBackup' --- Finished state OK at 14:36:10.059 - Runtime 00:00:01.825 --- ```",14 118022633,2016-10-24 16:48:53.512,"The ""restore-missing"" functionality is not working as expected (lp:#1636255)","[Original report](https://bugs.launchpad.net/bugs/1636255) created by **Vej (vej)** ``` I have found out that the ""--restore-missing"" functionality is not working as expected. I have set up déja-dup to backup some folders every day. If I delete some files today, and then I use the ""restore-missing"" functionality the same day (today in my example), déja-dup shows me the list of missing files as expected. The problem is that if I use the ""restore-missing"" functionality from tomorrow on, no files are listed anymore. It seems that ""restore-missing"" seeks the missing files only if the backup is made the same day that I deleted the files. The behavior I expected is that ""restore-missing"" should search the entire backup series, and create a list of all missing files since the first backup was made. 
ProblemType: Bug DistroRelease: Ubuntu 16.04 Package: deja-dup 34.2-0ubuntu1 ProcVersionSignature: Ubuntu 4.4.0-45.66-generic 4.4.21 Uname: Linux 4.4.0-45-generic x86_64 ApportVersion: 2.20.1-0ubuntu2.1 Architecture: amd64 CurrentDesktop: Unity Date: Mon Oct 24 18:09:40 2016 InstallationDate: Installed on 2014-04-23 (914 days ago) InstallationMedia: Ubuntu 14.04 LTS ""Trusty Tahr"" - Release amd64 (20140417) SourcePackage: deja-dup UpgradeStatus: Upgraded to xenial on 2016-07-31 (85 days ago) ``` Original tags: amd64 apport-bug third-party-packages xenial",8 118021689,2016-10-23 09:53:47.192,collection-status par2+file:// crashes (lp:#1635942),"[Original report](https://bugs.launchpad.net/bugs/1635942) created by **David Grajal (dgrabla)** ``` Duplicity version 0.7.10 Python version 2.7.9 OS Distro and version Debian 8.6 Type of target filesystem: Linux I noticed the collection-status command is not able to detect/list complete chains if the par2+file:// schema is used. If a cleanup command is run, no complete chain is detected and the whole backup is flagged (and deleted if a cleanup --force is run). The collection-status command has no trouble if the exact same backup was done with file:// instead. On a folder containing a backup made with par2+file://, collection-status cannot find the chain ``` > duplicity collection-status par2+file://. Local and Remote metadata are synchronized, no sync needed. Warning, found incomplete backup sets, probably left from aborted session Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /home/dgrabla/.cache/duplicity/e4858c724edc4a2aab4355784da9ba88 Found 0 secondary backup chains. No backup chains with active signatures found Also found 0 backup sets not part of any chain, and 1 incomplete backup set. These may be deleted by running duplicity with the ""cleanup"" command. 
``` If the backup was done with transport file:// (without par2) the collection-status is able to find complete chains: ``` > duplicity collection-status file://. Synchronizing remote metadata to local cache... Last full backup date: Sun Oct 23 03:25:54 2016 Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /home/dgrabla/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364 Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sun Oct 23 03:25:54 2016 Chain end time: Sun Oct 23 10:26:12 2016 Number of contained backup sets: 7 Total number of contained volumes: 103 Type of backup set: Time: Num volumes: Full Sun Oct 23 03:25:54 2016 97 Incremental Sun Oct 23 05:25:52 2016 1 Incremental Sun Oct 23 06:25:50 2016 1 Incremental Sun Oct 23 07:25:54 2016 1 Incremental Sun Oct 23 08:25:57 2016 1 Incremental Sun Oct 23 09:26:02 2016 1 Incremental Sun Oct 23 10:26:12 2016 1 ------------------------- No orphaned or incomplete backup sets found. ``` If, on the folder containing the backup made with par2+file://, we try to run collection-status with file:// (omitting the par2 part), duplicity crashes ``` > duplicity collection-status file://. Local and Remote metadata are synchronized, no sync needed. 
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1553, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1547, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1423, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 836, in get_backup_chains add_to_sets(f) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 824, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({4: 'duplicity- full.20161023T082638Z.vol4.difftar.gpg.par2'}, 'duplicity- full.20161023T082638Z.vol4.difftar.gpg.vol000+200.par2') ``` ``` Original tags: 0.7.10 collection-status",6 118021686,2016-10-12 21:05:39.125,KeyError while running verify (lp:#1632858),"[Original report](https://bugs.launchpad.net/bugs/1632858) created by **Pastafarianist (pastafarianist)** ``` WebDAV GET /path/to/backup.vol579.difftar.gpg request with headers: {'Connection': 'keep-alive', 'Authorization': 'Basic '} WebDAV data length: 4 WebDAV response status 200 with reason 'OK'. 
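The par2 AssertionError above boils down to two different par2 artifacts for the same volume parsing to the same volume number, which trips the "volume already in set" assertion in `collections.py`. A minimal sketch of the ambiguity, using a hypothetical volume-number regex (not duplicity's actual `file_naming` pattern):

```python
import re

# Hypothetical pattern in the spirit of duplicity's file_naming module:
# it keys a backend file by its "volN" component, ignoring whatever
# par2 appends after the real .difftar.gpg suffix.
VOL_RE = re.compile(r"\.vol(\d+)[.+]")

def vol_num(filename):
    """Return the volume number parsed out of a difftar filename, or None."""
    m = VOL_RE.search(filename)
    return int(m.group(1)) if m else None

# Both par2 artifacts for volume 4 parse to the same number, so the second
# one collides with the first, as in the AssertionError shown above.
a = "duplicity-full.20161023T082638Z.vol4.difftar.gpg.par2"
b = "duplicity-full.20161023T082638Z.vol4.difftar.gpg.vol000+200.par2"
```

Here `vol_num(a)` and `vol_num(b)` both yield 4, even though only one of them is the plain par2 index file.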
Deleting /tmp/duplicity-7CltV8-tempdir/mktemp-M2iDvp-592 Processed volume 585 of 4543 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1530, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1524, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1378, in main do_backup(action) File ""/usr/bin/duplicity"", line 1457, in do_backup verify(col_stats) File ""/usr/bin/duplicity"", line 860, in verify for backup_ropath, current_path in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 276, in collate2iters relem1 = riter1.next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 516, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 388, in yield_tuples setrorps(overflow, elems) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 377, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 127, in difftar2path_iter multivol_fileobj.close() # aborting in middle of multivol File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 248, in close if not self.addtobuffer(): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 236, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 343, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 332, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 759, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 580 duplicity 0.7.03 Python 2.7.9 Debian Jessie Target filesystem: WebDAV ```",20 118021682,2016-10-11 21:12:44.866,Backup on Backblaze B2 don't work on CentoS 6.8 (error : UnsupportedBackendScheme) (lp:#1632475),"[Original report](https://bugs.launchpad.net/bugs/1632475) created by **Gilbert Reims 
(gilbert721)** ``` Hi, As I see on the internet, if I want to use the latest version of Duplicity with a VPS server on CentOS 6.8 (or 6.x) I need to install it manually with all dependencies. I followed this tutorial: http://blog.helge.net/2014/02/how-to-install-duplicity-on-centos.html or http://clients.microtronix-tech.com/knowledgebase.php?action=displayarticle&id=5 It works and Duplicity works fine with FTP backup, and seems to work with all functions. If I try to use Duplicity with Backblaze B2, I have this error message when I launch a .sh script: UnsupportedBackendScheme: scheme not supported in url: b2://xxxxx:xxxxx@xxxx/xxxx/ But on another VPS with CentOS Linux release 7.2.1511, it works with B2. On the 2 servers I have Duplicity 0.7.10. Do you have an idea what the problem is? Maybe I need to add other dependencies? Thank you in advance for your answers, Regards, Gilbert Sorry if I should have asked a question instead of reporting a bug! ```",6 118021679,2016-10-07 18:21:45.233,Cross-device link error in LocalBackend (lp:#1631472),"[Original report](https://bugs.launchpad.net/bugs/1631472) created by **Igor (ertong)** ``` duplicity 0.7.10 (August 20, 2016) Linux backup 3.13.0-46-generic #79-Ubuntu SMP Tue Mar 10 20:06:50 UTC 2015 x86_64 x86_64 /usr/bin/python2 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] When we use duplicity with LocalBackend and the backup folder is mounted on a different drive than the archive dir, duplicity fails to copy signatures to the backup folder. In order to find the problem, 1. I added some messages in localbackend.py: http://take.ms/QMCMU 2. Ran duplicity with verbose output and LocalBackend file:///... pointing to a directory on a dedicated HDD 3. I received output like http://take.ms/92JW5, where we clearly see the ""[Errno 18] Invalid cross-device link"" error message 4. My backup has no signature files If I explicitly set --archive-dir and --tempdir to the same hard drive as the backup directory, everything works fine. 
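The "[Errno 18] Invalid cross-device link" above is the classic failure mode of `os.rename()` across filesystems. A hedged sketch of the usual fix (this is an illustration, not duplicity's actual LocalBackend code): try the cheap rename first, and fall back to copy-plus-delete on EXDEV:

```python
import errno
import os
import shutil

def move_file(src, dst):
    """Move src to dst, surviving the case where the two paths sit on
    different filesystems and os.rename() raises EXDEV."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Different device: copy data and metadata, then drop the original.
        shutil.copy2(src, dst)
        os.remove(src)
```

On a single filesystem the rename path runs; across mount points (backup dir on one drive, archive dir on another, as in the report) the copy fallback takes over.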
If I use different backend, everything works fine. ```",6 118021673,2016-10-07 15:32:49.321,Restarts fail because comparing long key ID to short key ID (lp:#1631414),"[Original report](https://bugs.launchpad.net/bugs/1631414) created by **Dan Watkins (oddbloke)** ``` Using duplicity 0.7.10 on Ubuntu yakkety, when a restart is attempted, I see a failure because the 8 character key ID is being compared against the 16 character key ID. See the relevant part of my log below: --- Start running command BKP at 16:24:00.197 --- Reading globbing filelist /exclude Synchronising remote metadata to local cache... Copying duplicity-inc.20161006T075558Z.to.20161007T091252Z.manifest.gpg to local cache. Copying duplicity-new- signatures.20161005T135758Z.to.20161006T075558Z.sigtar.gpg to local cache. Copying duplicity-new- signatures.20161006T075558Z.to.20161007T091252Z.sigtar.gpg to local cache. Last full backup left a partial set, restarting. Last full backup date: Fri Oct 7 15:19:21 2016 Reuse configured PASSPHRASE as SIGN_PASSPHRASE RESTART: Volumes 3 to 3 failed to upload before termination. Restarting backup at volume 3. Volume was signed by key 52DA9A50, not 4F07B22452DA9A50 16:25:53.466 Task 'BKP' failed with exit code '22'. --- Finished state FAILED 'code 22' at 16:25:53.466 - Runtime 00:01:53.269 --- ```",20 118021672,2016-10-03 20:26:34.755,Cloud authentication isn't renegotiated on long backups (lp:#1630002),"[Original report](https://bugs.launchpad.net/bugs/1630002) created by **Simon Greenwood (sfgreenwood-gmail)** ``` Version 0.7.10 Python 2.7.9 Debian 8.6 (jessie) Tested backends: Backblaze, Hubic I have a couple of large backups of around 300GB and 500GB that I am trying to create on cloud storage. I have tested with Backblaze B2 and Hubic as both seem to offer good value for money and as they work with duplicity. Both have token based authentication that times out after a period - on Backblaze it seems to be 12 hours and on Hubic it seems to be 48. 
Neither backend renegotiates once it starts failing. I would imagine that this applies to other cloud services. A common issue is that negotiation occurs in the init function and isn't tried again once the cloud service rejects the connection when the authentication period times out. A fix would be to separate out the negotiation function so that it can be called when the service rejects the connection, usually with a 403 error. I have started work on this for Backblaze. However, it also seems, certainly with the Backblaze backend, that resuming a failed backup doesn't initialise the connection and that the authentication token isn't renegotiated, so it may be the same with other services. ```",6 118021669,2016-09-30 00:43:55.338,Please support running backup as root but dropping privilege for gpg/network/etc (lp:#1629151),"[Original report](https://bugs.launchpad.net/bugs/1629151) created by **Josh Triplett (joshtriplett)** ``` In order to back up files owned by multiple users, I need to start duplicity as root. However, I don't want to run gpg, ssh, or a network connection to a storage service as root. Please consider offering a duplicity option to use root privileges to read the data to back up (or to write out restored data with restored ownership and permissions), but to drop privileges for all other operations (gpg, ssh, network connections, anything other than reading and writing the data to backup/restore). ```",8 118021666,2016-09-18 12:32:13.845,No incremental when backing up to Hubic (lp:#1624865),"[Original report](https://bugs.launchpad.net/bugs/1624865) created by **David (davfrombe)** ``` Hi, Using duplicity on Ubuntu 16.04, I can successfully back up to Hubic but only full backups are created. Every successive run creates a new full backup. duplicity collection-status returns ""Last full backup date: none"" How do I troubleshoot this so I can give you guys more details/logs? 
Duplicity version : duplicity 0.7.01 Python version : Python 2.7.9 OS Distro and version : Ubuntu server 16.04 LTS Type of target filesystem: Hubic cloud storage Command line : duplicity -v9 ~/test.txt cf+hubic://backup/ Regards, David ```",8 118021665,2016-09-10 10:43:10.771,Corrupted local cached file causes AssertionError (lp:#1622133),"[Original report](https://bugs.launchpad.net/bugs/1622133) created by **Phill (phill.l)** ``` After the following error (shown here with just a status command but happens with backups too), duplicity never recovers and reports the same error every time it is run: $ duplicity collection-status ***REDACTED*** Local and Remote metadata are synchronized, no sync needed. Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1532, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1526, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1380, in main do_backup(action) File ""/usr/bin/duplicity"", line 1405, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 835, in get_backup_chains add_to_sets(f) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 823, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 101, in add_filename self.set_manifest(filename) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 136, in set_manifest remote_filename) AssertionError: ('duplicity- inc.20160910T030003Z.to.20160910T040004Z.manifest.part', 'duplicity- inc.20160910T030003Z.to.20160910T040004Z.manifest.gpg') I deleted the "".part"" file from the local cache and duplicity worked from there after. The "".gpg"" file did not exist. 
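The manual fix just described (deleting a stale `.part` file whose finished `.gpg` counterpart never appeared) can be sketched as a small cleanup helper. This is a hypothetical routine, not something duplicity provides:

```python
import os

def prune_orphan_parts(cache_dir):
    """Delete '<name>.part' files in cache_dir that have no finished
    '<name>.gpg' counterpart; return the names removed."""
    removed = []
    for name in sorted(os.listdir(cache_dir)):
        if not name.endswith(".part"):
            continue
        finished = name[: -len(".part")] + ".gpg"
        if not os.path.exists(os.path.join(cache_dir, finished)):
            os.remove(os.path.join(cache_dir, name))
            removed.append(name)
    return removed
```

A `.part` that does have its `.gpg` twin is left alone, since duplicity may still reconcile the pair itself.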
I'm reporting this as a bug because I think it is reasonable to expect programs to manage their own cache and remove any cached content when they detect a problem. Unfortunately, I didn't take a copy of the offending file so I'm unable to assist in determining what the corruption was. However, after deleting the file and running the backup, no such file exists in the cache. $ duplicity --version duplicity 0.7.06 $ sudo apt list | grep duplicity duplicity/xenial,now 0.7.06-2ubuntu2 amd64 [installed] $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=16.04 DISTRIB_CODENAME=xenial DISTRIB_DESCRIPTION=""Ubuntu 16.04.1 LTS"" ```",6 118019010,2016-09-10 02:50:38.052,gio backend: please don't run dbus-launch unconditionally (lp:#1622074),"[Original report](https://bugs.launchpad.net/bugs/1622074) created by **az (az-debian)** ``` this is a forward of Debian bug #836088, which lives over there: http://bugs.debian.org/836088 the original reporter asks for improvements in the gio/dbus interaction logic, namely this: --- As described in I'm trying to reduce how much dbus-launch is used in Debian. duplicity's gio backend currently runs dbus-launch if DBUS_SESSION_BUS_ADDRESS is unset. This is only a minor bug in Debian, because Debian's dbus packaging is designed to ensure that either DBUS_SESSION_BUS_ADDRESS is set for normal sessions, or dbus-launch isn't available anyway. However, it could cause problems in edge cases and is probably worth fixing upstream. One issue with the current code is that it second-guesses how dbus itself will find the session bus. In recent libdbus and GDBus, the fallback behaviour if DBUS_SESSION_BUS_ADDRESS is unset is to look for $XDG_RUNTIME_DIR/bus: if the environment variable is set, that directory contains ./bus, and ./bus is a socket owned by the current uid, then libdbus and GDBus will automatically use it. In particular, the dbus-user-session Debian package sets up this situation. 
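The lookup order just described (honour DBUS_SESSION_BUS_ADDRESS, then check for a uid-owned socket at $XDG_RUNTIME_DIR/bus, and only then consider any fallback) can be sketched as follows. This mirrors the libdbus/GDBus behaviour the reporter describes and is an illustration, not duplicity's gio backend code:

```python
import os
import stat

def find_session_bus_address():
    """Locate a D-Bus session bus the way recent libdbus/GDBus do,
    before ever considering spawning dbus-launch."""
    addr = os.environ.get("DBUS_SESSION_BUS_ADDRESS")
    if addr:
        return addr
    runtime_dir = os.environ.get("XDG_RUNTIME_DIR")
    if runtime_dir:
        bus = os.path.join(runtime_dir, "bus")
        try:
            st = os.stat(bus)
        except OSError:
            return None
        # Use it only if it really is a socket owned by the current uid.
        if stat.S_ISSOCK(st.st_mode) and st.st_uid == os.getuid():
            return "unix:path=" + bus
    return None  # only here would a fallback such as dbus-launch be weighed
```

Returning `None` rather than eagerly launching a bus is the point of the request: the caller can then decide between X11 autolaunch, `dbus-run-session`, or simply failing with a clear message.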
duplicity should not run dbus-launch without first looking for that socket. Another issue with the approach duplicity has taken is that it relies on dbus-launch, which is X11-specific legacy code that does several things, none of them particularly well. This Flatpak commit illustrates how eval `dbus-launch` can be replaced by invoking dbus-daemon directly, avoiding the X11-specific dbus-launch: . It is possible to send both the address and the pid to stdout (""--print-address=1 --print-pid=1"") if that would be easier to do from Python; their order is undocumented but predictable, and I don't intend to break it in future D-Bus releases. Alternatively, and perhaps most simply, duplicity could stop trying to compensate for a missing session bus address at all. On systems with $XDG_RUNTIME_DIR/bus, it would ""just work"" anyway; or when run under X11 (even with no dbus-daemon running), X11 autolaunching would create a dbus-daemon anyway; or if duplicity is being run in a non-GUI environment, for example from cron, its documentation could mention that use of the gio backend sometimes requires using dbus-run-session -- duplicity [ARGS...] which has been available since dbus 1.8, and automatically cleans up the dbus-daemon after duplicity terminates (successfully or not). --- ```",6 118021659,2016-08-31 15:49:29.179,SIGINT sometimes ignored when writing to AWS S3 backends (lp:#1618951),"[Original report](https://bugs.launchpad.net/bugs/1618951) created by **Saj Goonatilleke (saj-r)** ``` While not documented in the duplicity(1) manual, signalling duplicity with SIGINT _should_ initiate a graceful shutdown: https://lists.nongnu.org/archive/html/duplicity-talk/2014-10/msg00014.html http://bazaar.launchpad.net/~duplicity- team/duplicity/0.7-series/view/1240/bin/duplicity#L1558 Unfortunately, due in part to buggy third-party libraries, this does not always work. Steps to reproduce: 1. Configure duplicity to back up to an AWS S3 backend. 2. 
Using whatever method you prefer, set up a filter to drop network traffic from duplicity to its backend. (So as to simulate a partial network failure.) 3. Invoke 'duplicity collection-status' against the AWS S3 backend. duplicity should stall here. 4. Signal the duplicity process with SIGINT. Expected behaviour: duplicity removes ephemeral state and locks from the local filesystem then self-terminates with non-zero exit status. Observed behaviour: duplicity fails to terminate. Environment: - duplicity 0.7.09 - boto 2.41.0 - Python 2.7.5 - Linux (EL7) Analysis: duplicity expects KeyboardInterrupt to propagate up the stack once the Python runtime is signalled with SIGINT. If, at the point the SIGINT is handled by the Python runtime, duplicity is busy executing boto library routines, the KeyboardInterrupt may be gobbled by a greedy 'except' clause: https://github.com/boto/boto/blob/abb38474ee5124bb571da0c42be67cd27c47094f/boto/s3/connection.py#L574-L577 In this case, duplicity will never see the KeyboardInterrupt. A boto bug was filed for the catch-all in May of 2014: https://github.com/boto/boto/issues/2262 (This is a particularly egregious catch-all. 'except Exception:' would have been enough to avoid this particular problem on CPython >= 2.5.) I have not scanned other third-party imports to check whether this problem affects more backend types. As a workaround for this problem, our process supervisor was modified to signal duplicity with SIGTERM then perform its own cleanup as best as it could. This arrangement is fragile: what we assume of the current duplicity release may not hold in the future (e.g.: lock file paths). In an effort to guarantee behaviour, would it be possible to install a custom handler for SIGINT/SIGTERM? This handler would be immune to third-party blunders, though I imagine it would complicate the code.
```",6 118021658,2016-08-24 22:50:30.228,OneDrive: Often asks for reauth (lp:#1616664),"[Original report](https://bugs.launchpad.net/bugs/1616664) created by **Sven (n-ubuntu-one)** ``` First of all thanks for the great software and thanks for the OneDrive plugin. OneDrive support is quite rare (especially if one wants to access it using a Linux server). root@home ~/backups # duplicity --version duplicity 0.7.09 root@home ~/backups # uname -a Linux home 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3~bpo70+1 (2016-07-07) x86_64 GNU/Linux root@home ~/backups # lsb_release -a Distributor ID: Debian Description: Debian GNU/Linux 7.11 (wheezy) Release: 7.11 Codename: wheezy root@home ~/backups # python --version Python 2.7.3 I get this message from time to time (a few times per day): > In order to authorize duplicity to access your OneDrive, please open [url] in a browser and copy the URL of the blank page the dialog leads to. The thing is, every time I can (and need to) just hit enter without inputting anything and the upload continues. Even after quitting and letting duplicity continue by restarting it, the upload just works fine again without interaction. The main problem here is that one has to check on a regular basis whether such a request is blocking the upload progress (in my case the upload is slow and the whole process takes days, sometimes up to weeks, and this costs me a whole day without any progress if I don't have a look at it often enough). It would be very helpful if this could get fixed. Another message that comes up often (much more often) is this one: > Attempt 1 failed. Error: [] It doesn't have negative consequences and doesn't require manual interaction, but I'd at least expect it to show an error code instead of an empty list/array/object.
``` Original tags: onedrive",6 118023079,2016-08-18 16:12:37.045,Sauvegarde (Backup) (lp:#1614607),"[Original report](https://bugs.launchpad.net/bugs/1614607) created by **Gava Daniel (gava-daniel)** ``` Hello, Deja Dup worked very well until this morning. Several attempts have only confirmed the failure, for an unknown reason. I am forwarding you the Deja Dup report. I have recently moved to Ubuntu 16.04. file:///home/gava/Images/Capture%20du%202016-08-18%2015-29-25.png /home/gava/Images/Capture du 2016-08-18 15-29-25.png ```",6 118018990,2016-08-07 11:37:29.442,"Incremental backup prevents ""restore-missing"" from restoring files that were deleted before (lp:#1610667)","[Original report](https://bugs.launchpad.net/bugs/1610667) created by **Vej (vej)** ``` If I do the following, I get an empty list. 1. Perform a fresh backup to a new folder (set a password for encryption). 2. Create the folder test and the file test/testfile.txt. 3. Restart the backup (perform an incremental backup). 4. Delete the file. 5. Restart the backup (perform an incremental backup). 6. Try to restore the file using LC_ALL=C deja-dup --restore-missing test/ Result: The list is empty. I expect to see the file testfile.txt in the list, so I can restore it using the backup created in Step 3. Some other users of the German community verified this bug (see https://forum.ubuntuusers.de/topic/fragen-zu-d-j-dup/ for an example - in German). I already mentioned this in bug #1377873, but that one is marked as ""Fix Released"", so ML suggested opening a new one.
Ubuntu 16.04.1 LTS deja-dup 34.2-0ubuntu1 duplicity 0.7.06-2ubuntu2 gsettings list-recursively org.gnome.DejaDup: org.gnome.DejaDup last-restore '2015-11-01T21:37:49.910024Z' org.gnome.DejaDup periodic true org.gnome.DejaDup full-backup-period 75 org.gnome.DejaDup backend 'file' org.gnome.DejaDup last-run '2016-08-07T10:21:31.709004Z' org.gnome.DejaDup nag-check '2016-07-31T21:55:33.643340Z' org.gnome.DejaDup prompt-check '2013-01-27T20:24:01.762209Z' org.gnome.DejaDup root-prompt true org.gnome.DejaDup include-list org.gnome.DejaDup exclude-list org.gnome.DejaDup last-backup '2016-08-07T10:21:31.709004Z' org.gnome.DejaDup periodic-period 1 org.gnome.DejaDup delete-after 730 [...] org.gnome.DejaDup.File path 'file:///media/user/extern/Deja-Backup' org.gnome.DejaDup.File short-name 'extern' org.gnome.DejaDup.File uuid org.gnome.DejaDup.File icon '. GThemedIcon drive-harddisk-usb drive- harddisk drive' org.gnome.DejaDup.File relpath b'Desktop/extern' org.gnome.DejaDup.File name 'Vendor External USB 3.0: extern' org.gnome.DejaDup.File type 'volume' ``` Original tags: restore",22 118021655,2016-08-04 20:29:38.186,"par2+file does not work on mounted volumes on OSX: Warning, found signatures but no corresponding backup files (lp:#1609966)","[Original report](https://bugs.launchpad.net/bugs/1609966) created by **Lars Volker (lv)** ``` Duplicity 0.7.06 (installed with homebrew) Python 2.7.10 OSX 10.11.6 Target filesystem: par2+file to /dev/disk5s1 on /Volumes/DupTest (hfs, local, nodev, nosuid, journaled, noowners, mounted by lv) A full backup works with file:// to both the root disk (hfs, local, journaled) and to /Volumes/DupTest. With par2+file:// it works to the root disk, but *not* to /Volumes/DupTest. It seems to be the specific combination of par2, file and mounted Volume. It doesn't matter where the data resides that I try to backup, only the target filesystem seems to make a difference. 
Here is the log of a failed run: ✔ ~/tmp/duptest$ find test test test/1 ✔ ~/tmp/duptest$ duplicity full -v9 test par2+file:///Volumes/DupTest/backup Using archive dir: /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6 Using backup name: 5a09dcb3de2ecb2901ecf1c600926ad6 Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: full ================================================================================ duplicity 0.7.06 (December 07, 2015) Args: /usr/local/Cellar/duplicity/0.7.06_1/libexec/bin/duplicity full -v9 test par2+file:///Volumes/DupTest/backup Darwin MacBook-Pro.local 15.6.0 Darwin Kernel Version 15.6.0: Thu Jun 23 18:25:34 PDT 2016; 
root:xnu-3248.60.10~1/RELEASE_X86_64 x86_64 i386 /usr/bin/python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] ================================================================================ Using temporary directory /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz-tempdir Registering (mkstemp) temporary file /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/mkstemp-gPCEWY-1 Temp has 65250316288 available, backup will use approx 34078720. Synchronizing remote metadata to local cache... Deleting local /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity-full- signatures.20160804T185221Z.sigtar.gpg (not authoritative at backend). Deleting local /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- full.20160804T185221Z.manifest (not authoritative at backend). 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. GnuPG passphrase: PASSPHRASE variable not set, asking user. 
Retype passphrase to confirm: Using temporary directory /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- fuEWCa-tempdir Registering (mktemp) temporary file /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- fuEWCa-tempdir/mktemp-AeSjBd-1 Using temporary directory /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- _cSMKl-tempdir Registering (mktemp) temporary file /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- _cSMKl-tempdir/mktemp-7lXd53-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/mktemp-hL4hWy-2 Selecting test Comparing . and None Getting delta of (. dir) and None A . Selection: examining path test/1 Selection: + no selection functions found. Including Selecting test/1 Comparing 1 and None Getting delta of (1 reg) and None A 1 Removing still remembered temporary file /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- fuEWCa-tempdir/mktemp-AeSjBd-1 Removing still remembered temporary file /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity- _cSMKl-tempdir/mktemp-7lXd53-1 AsyncScheduler: running task synchronously (asynchronicity disabled) Writing duplicity-full.20160804T202254Z.vol1.difftar.gpg Making directory /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1 Create Par2 recovery files Deleting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity-full.20160804T202254Z.vol1.difftar.gpg Deleting tree /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1 Selecting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1 Selection: examining path /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- 
full.20160804T202254Z.vol1.difftar.gpg.par2 Selection: + no selection functions found. Including Selecting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- full.20160804T202254Z.vol1.difftar.gpg.par2 Deleting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- full.20160804T202254Z.vol1.difftar.gpg.par2 Selection: examining path /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- full.20160804T202254Z.vol1.difftar.gpg.vol0+6.par2 Selection: + no selection functions found. Including Selecting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- full.20160804T202254Z.vol1.difftar.gpg.vol0+6.par2 Deleting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1/duplicity- full.20160804T202254Z.vol1.difftar.gpg.vol0+6.par2 Deleting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/duplicity_temp.1 Deleting /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/mktemp-hL4hWy-2 Forgetting temporary file /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/mktemp-hL4hWy-2 AsyncScheduler: task completed successfully Processed volume 1 Making directory /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Create Par2 recovery files Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg Deleting tree /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Selection: examining path /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.par2 Selection: + no selection functions found. 
Including Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.par2 Selection: examining path /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.vol0+7.par2 Selection: + no selection functions found. Including Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.vol0+7.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full-signatures.20160804T202254Z.sigtar.gpg.vol0+7.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Making directory /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Create Par2 recovery files Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg Deleting tree /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 Selection: examining path /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.par2 Selection: + no selection functions found. 
Including Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.par2 Selection: examining path /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.vol0+5.par2 Selection: + no selection functions found. Including Selecting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.vol0+5.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1/duplicity- full.20160804T202254Z.manifest.gpg.vol0+5.par2 Deleting /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/duplicity_temp.1 1 file exists on backend 6 files exist in cache Extracting backup chains from list of files: [u'duplicity- full.20160804T202254Z.vol1.difftar.gpg'] File duplicity-full.20160804T202254Z.vol1.difftar.gpg is not part of a known set; creating new set Warning, found incomplete backup sets, probably left from aborted session --------------[ Backup Statistics ]-------------- StartTime 1470342175.75 (Thu Aug 4 21:22:55 2016) EndTime 1470342175.76 (Thu Aug 4 21:22:55 2016) ElapsedTime 0.01 (0.01 seconds) SourceFiles 2 SourceFileSize 107 (107 bytes) NewFiles 2 NewFileSize 107 (107 bytes) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 2 RawDeltaSize 5 (5 bytes) TotalDestinationSizeChange 227 (227 bytes) Errors 0 ------------------------------------------------- Releasing lockfile /Users/lv/.cache/duplicity/5a09dcb3de2ecb2901ecf1c600926ad6/lockfile.lock Removing still remembered temporary file /var/folders/pz/kczql79d54l1fbb9b9k11lch0000gp/T/duplicity-n2Wodz- tempdir/mkstemp-gPCEWY-1 ```",10 118021648,2016-07-24 11:28:29.448,b2 backend: dies with unicode error 
(lp:#1605985),"[Original report](https://bugs.launchpad.net/bugs/1605985) created by **az (az-debian)** ``` this is a forward of debian bug #830896, which lives over there: http://bugs.debian.org/830896 the original reporter ""created a Backblaze B2 Cloud Storage account and tried to use it with duplicity. However all duplicity commands using B2 fail for me with the same error messages: /usr/lib/python2.7/dist-packages/duplicity/backends/b2backend.py:211: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal if bucket_name not in bucket_names: FatalBackendException: Bucket cannot be created"" ```",6 118021632,2016-07-21 08:58:11.501,Performance issue while handling big storages. (lp:#1605133),"[Original report](https://bugs.launchpad.net/bugs/1605133) created by **Grzegorz Żyszkiewicz (grzegorz-zyszkiewicz)** ``` There is a performance issue while handling a big collection. By big I mean 10000+ files (3000 different full backups) already backed up by duplicity. This issue is caused by unnecessary filename parsing in collections.BackupSet.add_filename. In CollectionsStatus.get_backup_chains a set of BackupSet objects is created, and each file is checked against every already existing BackupSet by calling collections.BackupSet.add_filename for each filename in the list. This causes the same filename to be parsed unnecessarily up to N times (where N is the number of full backups). It can be avoided by parsing all filenames beforehand and passing the already parsed filename along with the raw filename into CollectionsStatus.get_backup_chains.add_to_sets and further into collections.BackupSet.add_filename. The same mechanism already exists within SignatureChain.add_filename. This change decreased duplicity's running time from 22 minutes to 8 minutes in my case.
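The parse-once refactor the reporter proposes can be sketched in standalone form: parse every filename a single time up front, then group the parsed results, instead of re-parsing inside each candidate set. The regex below covers only full-backup volume names as an illustration and is not duplicity's actual collections code:

```python
import re
from collections import defaultdict

# Illustrative pattern: matches only full-backup volume names such as
# "duplicity-full.20160704T120000Z.vol3.difftar.gpg".
_FULL_VOL = re.compile(
    r"duplicity-full\.(?P<time>[0-9TZ]+)\.vol(?P<num>\d+)\.difftar")

def parse_filename(filename):
    """Parse a backup filename once; return (time, volume) or None."""
    m = _FULL_VOL.match(filename)
    return (m.group("time"), int(m.group("num"))) if m else None

def group_into_sets(filenames):
    # Each filename is parsed exactly once, then grouped by backup time,
    # instead of being re-parsed for every existing set it is tested
    # against (which is what makes the cost quadratic-ish).
    sets = defaultdict(dict)
    for name in filenames:
        parsed = parse_filename(name)
        if parsed:
            time, num = parsed
            sets[time][num] = name
    return dict(sets)
```

With N full backups, this turns N parses per file into one, which matches the shape of the speedup reported above.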
```",6 118021611,2016-07-12 06:03:53.093,Full backup missing config files and hidden directories (lp:#1602111),"[Original report](https://bugs.launchpad.net/bugs/1602111) created by **Anes Lihovac (anes-lihovac-gmail)** ``` I have selected that my home directory gets backed up daily, excluding only ~/Downloads. After an erase of my home and restoring everything, I am missing everything under ~/.local/share/* System information: duplicity 0.7.02 Python 2.7.10 Ubuntu 15.10 + encrypted home directory; the backup is on a separate disc. ```",6 118021594,2016-07-06 09:08:46.288,Spelling error in the error message when passing a bad time string (lp:#1599433),"[Original report](https://bugs.launchpad.net/bugs/1599433) created by **Dan (dannyhajj)** ``` When passing a bad time string in the duplicity command, the following error message shows: > The acceptible time strings are intervals (like ""3D64s""), w3-datetime > strings, like ""2002-04-26T04:22:01-07:00"" (strings like > ""2002-04-26T04:22:01"" are also acceptable - duplicity will use the > current time zone), or ordinary dates like 2/4/1997 or 2001-04-23 > (various combinations are acceptable, but the month always precedes > the day). The word ""acceptible"" is misspelled and should be ""acceptable"". For example: duplicity remove-older-than 2016-0101 file:///home/user/deja-dup/ $ duplicity --version duplicity 0.7.06 $ python --version Python 2.7.11+ OS: Ubuntu 16.04 ``` Original tags: spelling",6 118021574,2016-06-29 07:45:13.751,IMAP corrupts chain and won't resume after stopping mid backup. (lp:#1597209),"[Original report](https://bugs.launchpad.net/bugs/1597209) created by **Richard Scott (2-launchpad-pointb-co-uk)** ``` When a backup to an IMAP server aborts part way through (either via CTRL+C, killing the process, an accidental reboot or a press of the reset button), the next time I try to use that target I am unable to do ANYTHING with the IMAP data store.
*** No backup is usable, not even old full/inc chains that have previously uploaded fine *** I am using Duply to manage my backups, and the problem seems to be due to the manifest files not being uploaded to the target server until late in the backup process... this causes the next run to abort, as Duplicity gets really confused because it can't find any manifest files either remotely or locally... even though the local ones do exist and have not been deleted. I get this error: IMAP LIST: duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol114.difftar.gpg target IMAP LIST: duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol115.difftar.gpg target Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1544, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1538, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1392, in main     do_backup(action)   File ""/usr/bin/duplicity"", line 1417, in do_backup     globals.archive_dir).set_values()   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 710, in set_values     self.get_backup_chains(partials + backend_filename_list)   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 835, in get_backup_chains     add_to_sets(f)   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 823, in add_to_sets     if set.add_filename(filename):   File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 105, in add_filename     (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol1.difftar.gpg', 2: 'duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol2.difftar.gpg', 3: 'duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol3.difftar.gpg', 4: 'duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol4.difftar.gpg'}, 'duplicity-inc.20160628T132706Z.to.20160628T132842Z.vol4.difftar.gpg') 18:59:42.624 Task 'BKP' failed with exit code '30'.
So far, the only fix I can find is to delete all the emails in that folder on the IMAP server and start again. ```",6 118021512,2016-06-24 07:15:47.547,Duplicity doesn't want to create an incremental backup (lp:#1595857),"[Original report](https://bugs.launchpad.net/bugs/1595857) created by **Андрей Калинин (prize2step)** ``` duplicity 0.7.07.1 (April 19, 2016) Python version: Python 2.7.11 OS: FreeBSD mercury.pkb.local 9.3-RELEASE FreeBSD 9.3-RELEASE #0 r268512: Fri Jul 11 03:13:02 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC i386 Running command: /usr/local/bin/duplicity -v9 --full-if-older-than 1M --volsize 250 /var/mail/exim ftp://*****@192.168.0.8/ftp_shares/BACKUP/mail ```",6 118019379,2016-06-19 05:16:30.375,Swift upload performance (lp:#1594068),"[Original report](https://bugs.launchpad.net/bugs/1594068) created by **Dinacel (o-admin-2)** ``` Using Duplicity with the Swift backend gives poor performance (at least with OVH PCS/Hubic); the Swift client can send an object via multiple connections to maximise bandwidth usage. With 10 threads I can saturate a 100Mbps interface, but with the default (1), I can only send at an average rate of 10Mbps. http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#module-swiftclient.multithreading ``` Original tags: swift",6 118021460,2016-06-15 12:33:59.951,Log entries should have optional timestamps (lp:#1592799),"[Original report](https://bugs.launchpad.net/bugs/1592799) created by **Markus (mstoll-de)** ``` With long-running backups, warning and error log entries are often related to network problems.
To be able to research these problems later, it would be nice to have timestamps on log entries (at least on errors and warnings). Markus ```",18 118022652,2016-06-09 18:54:22.517,Backup fails with unknown error (assertion failure in duplicity) (lp:#1590926),"[Original report](https://bugs.launchpad.net/bugs/1590926) created by **Wim Lewis (wiml)** ``` Deja Dup and/or duplicity is failing for me now. This might have been triggered by my being away from the machine when the SSH passphrase prompt came up, and the session timing out. Or that may be a red herring. The ""Backup Failed / Failed with an unknown error:"" window has this traceback in it: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1488, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1337, in main do_backup(action) File ""/usr/bin/duplicity"", line 1370, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 697, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 819, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 809, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 96, in add_filename self.set_manifest(filename) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 127, in set_manifest remote_filename) AssertionError: ('duplicity-full.20160527T063251Z.manifest.gpg', 'duplicity-full.20160527T063251Z.manifest') $ lsb_release -d Description: Ubuntu 14.04.4 LTS $ dpkg-query -W deja-dup duplicity deja-dup 30.0-0ubuntu4 duplicity 0.6.23-1ubuntu4.1 ```",6 118021435,2016-06-08 19:53:05.791,cleanup not working as expected (lp:#1590540),"[Original report](https://bugs.launchpad.net/bugs/1590540)
created by **Markus (mstoll-de)** ``` Duplicity version 0.7.07.01 Python version 2.7 OS OSX 10.11.5 When using cleanup after an incomplete backup, from the documentation I expected that cleanup would list the files to be removed from the backend (and that I would have to add the ""--force"" option to actually delete the files). However, I observed that the incomplete backup is always deleted even though the ""--force"" option is not set. When cleaning up gpg-encrypted backups, cleanup always asks for the GnuPG passphrase at the end, though obviously the passphrase is not needed and not used. ```",6 118021410,2016-06-05 11:37:05.301,gpg randomly fails (lp:#1589226),"[Original report](https://bugs.launchpad.net/bugs/1589226) created by **John (johnniedoe)** ``` I have a ca. 20 GB folder I want to back up into S3. I used to do it without any problems on Ubuntu 14.04/16.04, but after migrating to Arch it seems impossible. I'm using duply, and the whole backup starts as usual: it asks me for my GPG password and starts uploading volumes (with 'success' messages), but after an hour or so I get this message every time (even though at the very start of the whole procedure duply/duplicity actually tests whether or not it can encrypt, sign and decrypt, so I don't get why it can't after uploading over 150 volumes..). It's not connected with any specific volume/folder/file being backed up. [...]
Registering (mktemp) temporary file /tmp/duplicity-fCBKtp-tempdir/mktemp- Mste0K-168 AsyncScheduler: running task synchronously (asynchronicity disabled) Writing duplicity-full.20160604T205531Z.vol167.difftar.gpg Uploading s3://s3-eu- west-1.amazonaws.com/johnniedoe_bucket/backup/duplicity- full.20160604T205531Z.vol167.difftar.gpg to STANDARD Storage Uploaded s3://s3-eu- west-1.amazonaws.com/johnniedoe_bucket/backup/duplicity- full.20160604T205531Z.vol167.difftar.gpg to STANDARD Storage at roughly 719776.112915 bytes/second Deleting /tmp/duplicity-fCBKtp-tempdir/mktemp-Mste0K-168 Forgetting temporary file /tmp/duplicity-fCBKtp-tempdir/mktemp-Mste0K-168 AsyncScheduler: task completed successfully Processed volume 167 Registering (mktemp) temporary file /tmp/duplicity-fCBKtp-tempdir/mktemp- Vh1Nwa-169 Releasing lockfile /home/johnniedoe/.cache/duplicity/duply_s3/lockfile.lock Removing still remembered temporary file /tmp/duplicity-fCBKtp- tempdir/mktemp-Vh1Nwa-169 Removing still remembered temporary file /tmp/duplicity-fCBKtp- tempdir/mkstemp-O2x7AF-1 GPG error detail: Traceback (most recent call last):   File ""/usr/bin/duplicity"", line 1537, in     with_tempdir(main)   File ""/usr/bin/duplicity"", line 1531, in with_tempdir     fn()   File ""/usr/bin/duplicity"", line 1385, in main     do_backup(action)   File ""/usr/bin/duplicity"", line 1506, in do_backup     full_backup(col_stats)   File ""/usr/bin/duplicity"", line 572, in full_backup     globals.backend)   File ""/usr/bin/duplicity"", line 430, in write_multivol     at_end = gpg.GPGWriteFile(tarblock_iter, tdp.name, globals.gpg_profile, globals.volsize)   File ""/usr/lib/python2.7/site-packages/duplicity/gpg.py"", line 356, in GPGWriteFile     file.close()   File ""/usr/lib/python2.7/site-packages/duplicity/gpg.py"", line 241, in close     self.gpg_failed()   File ""/usr/lib/python2.7/site-packages/duplicity/gpg.py"", line 226, in gpg_failed     raise GPGError(msg) GPGError: GPG Failed, see log below: ===== 
Begin GnuPG log ===== gpg: using "">MYKEYID<"" as default secret key for signing gpg: signing failed: Operation cancelled gpg: [stdin]: sign+encrypt failed: Operation cancelled ===== End GnuPG log ===== GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: using ""FE6BA5E6"" as default secret key for signing gpg: signing failed: Operation cancelled gpg: [stdin]: sign+encrypt failed: Operation cancelled ===== End GnuPG log ===== 00:56:39.570 Task 'BKP' failed with exit code '31'. --- Finished state FAILED 'code 31' at 00:56:39.570 - Runtime 02:01:10.105 --- --- Start running command POST at 00:56:39.590 --- Skipping n/a script '/home/johnniedoe/.duply/s3/post'. --- Finished state OK at 00:56:39.608 - Runtime 00:00:00.017 --- Using installed duplicity version 0.7.07.1, python 2.7.11, gpg 2.1.12 (Home: ~/.gnupg), awk 'GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4-p1, GNU MP 6.1.0)', grep 'grep (GNU grep) 2.25', bash '4.3.42(1)-release (x86_64-unknown-linux-gnu)'. Linux 4.5.4-1-ARCH #1 SMP PREEMPT Wed May 11 22:21:28 CEST 2016 x86_64 GNU/Linux ```",6 118021378,2016-05-26 13:02:30.156,Document glob behavior with trailing slash (lp:#1586032),"[Original report](https://bugs.launchpad.net/bugs/1586032) created by **René 'Necoro' Neumann (necoro)** ``` With bug #1479545, globbing was changed (or restored) to behave differently, when the pattern has a trailing /. This fix silently broke my backup, as I had given --include /var/vmail/ --exclude '**', thus removing all emails from the backup. This line had worked for quite some time... While I understand the rationale behind the change in the aforementioned bug, it breaks the general assumption that /some/dir and /some/dir/ are equivalent (i.e. when no globbing is involved). At the very least, it should be documented prominently! Currently it is not mentioned at all in the man-page... 
```",10 118021357,2016-05-13 12:48:20.491,Duplicity looking for wrong file (lp:#1581508),"[Original report](https://bugs.launchpad.net/bugs/1581508) created by **Florent B (florent-z)** ``` Hi, I use Duplicity 0.7.07.1 on Debian Wheezy. Backups (full & incremental) run fine for a few days, then suddenly Duplicity fails to back up, returning this error: Copying duplicity-new-signatures.20160504T132402Z.to.20160504T142402Z.sigtar. to local cache. Attempt 1 failed. BackendException: scp get duplicity-new-signatures.20160504T132402Z.to.20160504T142402Z.sigtar. failed: incorrect response '#001scp: cephfs/project-api3/api//duplicity-new-signatures.20160504T132402Z.to.20160504T142402Z.sigtar.: No such file or directory' ... With 5 attempts. As you can see, Duplicity is looking for the ""duplicity-new-signatures.20160504T132402Z.to.20160504T142402Z.sigtar."" file; it is missing ""gpg"" at the end of the filename! What could be the origin of this ""bug""? Thank you. ```",6 118019209,2016-04-28 19:51:20.541,Slow exclude pattern matching (lp:#1576389),"[Original report](https://bugs.launchpad.net/bugs/1576389) created by **Arthur Peters (amp)** ``` Large exclude files cause duplicity to scan files very slowly. My suspicion is that the time is spent checking whether each file matches the exclude list. The problem is particularly noticeable if the exclude file includes the full prefix of every file. For example: /home/user/data/something/excluded.dat would be slower than **/something/excluded.dat I collected some time data and it appears that the time spent is increasing rapidly, but linearly in the number of exclude patterns. For example, with 0 patterns a backup of ~1700 files took 14s; with 40 patterns it took 72s. Based on this and the rest of the data on this backup set, every pattern adds about 1.5s to the time taken. This was tested with prefixes of the attached ""excludeFromBackup-HDD.lst"". 
Removing the long prefixes and replacing them with ** shows a significant improvement, but still shows noticeable slowdown. In this case the slowdown was 0.1s per pattern. However, I suspect the difference would increase with more input files. This was tested with the attached ""excludeFromBackup-HDD2.lst"". I had better performance in 0.7.03. This is a bit of a problem for me since my actual backups use just over 1800 patterns (generated from makefile clean rules among other things). I have also attached a file ""dupl.log"" with the first and last 200 lines. The command line for this run was: time duplicity full -v9 --exclude-filelist excludeFromBackup-HDD.lst $PWD/JunkFromKitteh/ file://$HOME/tmp/tempbackup/ $PWD/JunkFromKitteh/ is the ~1700 files I tested with above. All testing was on: Ubuntu 15.10 duplicity 0.7.07.1 (from PPA) Python 2.7.10 Target FS: local btrfs ``` Original tags: performance regression",28 118018981,2016-04-28 06:26:52.428,Duplicity cannot handle sparse files efficiently (lp:#1576051),"[Original report](https://bugs.launchpad.net/bugs/1576051) created by **Sven Mueller (smu-u)** ``` When running a backup with duplicity, it cannot handle sparse files efficiently. I have a virtual machine image in my home dir, 26G -rw-r--r-- 1 libvirt-qemu kvm 33G huhti 28 08:59 dev02.img and duplicity backs it up as 33G, when it should only back up 26G. Steps to reproduce 1. Use qemu-img to create a sparse file 2. Run duplicity 3. 
Observe duplicity back it up whole instead of very small size ```",22 118021329,2016-04-18 06:56:54.708,Duplicity 0.7.07 completely fails to upload to Backblaze B2 backend (lp:#1571514),"[Original report](https://bugs.launchpad.net/bugs/1571514) created by **Lee W (z-ubunguone-z)** ``` Duplicity version - 0.7.07 Python version - 2.7.6 OS Distro and version - Ubuntu 14.04.04 LTS Type of target filesystem: Linux, backing up to b2:// Log output from -v9 option - /usr/local/bin/duplicity -v9 --full-if-older-than 1M --encrypt-key --gpg-options=--always-trust --volsize 512 --asynchronous-upload /var/backups/test b2://:@test1234-fl/test The b2 acct-ID and key have been verified as working via the Backblaze 'b2' CLI tool and 'b2 authorize_account'. Per the -v9 output, the request to https://api001.backblaze.com/b2api/v1/b2_list_file_names returns with status 200, but a request to https://api001.backblaze.com/b2api/v1/b2_get_upload_url returns an HTTP 403 Forbidden. Full -v9 output is here: http://www.fpaste.org/356908/09617151/ ``` Original tags: b2 backblaze",10 118021300,2016-04-16 15:23:34.297,encountering locked excluded files wrongly considered as an error (lp:#1571204),"[Original report](https://bugs.launchpad.net/bugs/1571204) created by **mkoniecz (matkoniecz)** ``` --exclude shell_pattern is documented as ""Exclude the file or files matched by shell_pattern. If a directory is matched, then files under that directory will also be matched. 
See the FILE SELECTION section for more information."" but duplicity / --include '/home/user1' --exclude '**' file:///media/mateusz/Backup/user1_backup behaves in an unexpected way, with two errors reported: Error accessing possibly locked file /lost+found Error accessing possibly locked file /root Given that these folders are matched by shell_pattern, it should not matter whether duplicity can actually access them - testing whether a file is inaccessible and complaining should be done only for files supposed to be included in the backup. This weird --include --exclude pattern was suggested in http://askubuntu.com/questions/757358/restore-duplicity-archive-with-entire-path-remembered/757608#757608 as a solution for a different problem duplicity --version duplicity 0.6.23 python --version Python 2.7.6 lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.4 LTS Release: 14.04 Codename: trusty ext4 to ext4, the same partition ```",14 118021276,2016-04-13 19:27:56.223,"unclear documentation in section ""Known Issues / Bugs"" (lp:#1570069)","[Original report](https://bugs.launchpad.net/bugs/1570069) created by **mkoniecz (matkoniecz)** ``` ""Bad signatures will be treated as empty instead of logging appropriate error message."" - what does it mean? From other parts of the documentation I am guessing that, with an incremental backup in a case where Duplicity-produced metadata is corrupted, the user will not be notified and Duplicity will silently produce a full backup. But I am not 100% sure that my understanding is correct. I think that it would be preferable to make the ""Known Issues / Bugs"" section clear, also for newcomers. 
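The first-match-wins ordering behind the --include/--exclude pair discussed above can be sketched with a small stand-in for duplicity's selection logic. This uses Python's fnmatch as a simplified substitute for duplicity's own glob engine (so '**' is approximated by '*'), and the function name is illustrative, not duplicity's API:

```python
import fnmatch

def selected(path, rules):
    # rules is an ordered list of ('include' | 'exclude', pattern);
    # the first rule whose pattern matches decides, default is include.
    for action, pattern in rules:
        if fnmatch.fnmatchcase(path, pattern):
            return action == 'include'
    return True

# Rough equivalent of: --include '/home/user1' --exclude '**'
rules = [('include', '/home/user1*'), ('exclude', '*')]
```

Under this ordering /lost+found and /root are excluded before anything would need to open them, which is the reporter's point: the locked-file check should only run for paths that survive selection.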
```",6 118021253,2016-04-13 19:12:40.648,broken formatting at http://duplicity.nongnu.org/duplicity.1.html#toc (lp:#1570067),"[Original report](https://bugs.launchpad.net/bugs/1570067) created by **mkoniecz (matkoniecz)** ``` at http://duplicity.nongnu.org/duplicity.1.html#toc section after ""Query Parameters"" is titled in such way that indicates broken formatting (""mode=stripeThis mode (the default) performs round-robin access to the list ofbackends. In this mode, all backends must be reliable as a loss of onemeans a loss of one of the archive files.mode=mirrorThis mode accesses backends as a RAID1-store, storing every file inevery backend and reading files from the first-successful backend.A loss of any backend should result in no failure. Note that backendsadded later will only get new files and may require a manual syncwith one of the other operating ones.onfail=continueThis setting (the default) continues all write operations in asbest-effort. Any failure results in the next backend tried. Failureis reported only when all backends fail a given operation with theerror result from the last failure.onfail=abortThis setting considers any backend write failure as a terminatingcondition and reports the error.Data reading and listing operations are independent of this andwill try with the next backend on failure.JSON File Example"") ```",6 118021234,2016-04-11 04:19:48.540,B2 backend crashes when logging bad request (lp:#1568678),"[Original report](https://bugs.launchpad.net/bugs/1568678) created by **Mikael Moutakis (mikaelmoutakis)** ``` I'm using duplicity 0.7.06 on FreeBSD I was backuping a large (100 gig+) dataset to Backblaze B2. 
The program crashed with the following message Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1532, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1526, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1380, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1501, in do_backup full_backup(col_stats) File ""/usr/local/bin/duplicity"", line 567, in full_backup globals.backend) File ""/usr/local/bin/duplicity"", line 448, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/local/lib/python2.7/site-packages/duplicity/asyncscheduler.py"", line 146, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/local/lib/python2.7/site-packages/duplicity/asyncscheduler.py"", line 172, in __run_synchronously ret = fn(*params) File ""/usr/local/bin/duplicity"", line 447, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num), File ""/usr/local/bin/duplicity"", line 338, in put backend.put(tdp, dest_filename) File ""/usr/local/lib/python2.7/site-packages/duplicity/backend.py"", line 374, in inner_retry code = _get_code_from_exception(self.backend, operation, e) File ""/usr/local/lib/python2.7/site-packages/duplicity/backend.py"", line 346, in _get_code_from_exception return backend._error_code(operation, e) or log.ErrorCode.backend_error File ""/usr/local/lib/python2.7/site-packages/duplicity/backends/b2backend.py"", line 180, in _error_code return log.ErrorCode.bad_request NameError: global name 'log' is not defined ``` Original tags: b2",10 118021208,2016-04-05 15:02:45.190,assertion error when restoring backup (lp:#1566372),"[Original report](https://bugs.launchpad.net/bugs/1566372) created by **Jeff (westie76)** ``` Ubuntu 14.04.4 LTS deja-dup 30.0-0ubuntu4 duplicity 0.6.23-1ubuntu4.1 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main) File ""/usr/bin/duplicity"", line 
1488, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1337, in main do_backup(action) File ""/usr/bin/duplicity"", line 1370, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 697, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 819, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 809, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 100, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20160402T104304Z.vol1.difftar.gz', 2: 'duplicity-full.20160402T104304Z.vol2.difftar.gz', 3: 'duplicity-full.20160402T104304Z.vol3.difftar.gz', 4: 'duplicity-full.20160402T104304Z.vol4.difftar.gz', 5: 'duplicity-full.20160402T104304Z.vol5.difftar.gz', 6: 'duplicity-full.20160402T104304Z.vol6.difftar.gz', 7: 'duplicity-full.20160402T104304Z.vol7.difftar.gz', 8: 'duplicity-full.20160402T104304Z.vol8.difftar.gz', 9: 'duplicity-full.20160402T104304Z.vol9.difftar.gz', 10: 'duplicity-full.20160402T104304Z.vol10.difftar.gz', 11: 'duplicity-full.20160402T104304Z.vol11.difftar.gz', 12: 'duplicity-full.20160402T104304Z.vol12.difftar.gz', 13: 'duplicity-full.20160402T104304Z.vol13.difftar.gz', 14: 'duplicity-full.20160402T104304Z.vol14.difftar.gz', 15: 'duplicity-full.20160402T104304Z.vol15.difftar.gz', 16: 'duplicity-full.20160402T104304Z.vol16.difftar.gz', 17: 'duplicity-full.20160402T104304Z.vol17.difftar.gz', 18: 'duplicity-full.20160402T104304Z.vol18.difftar.gz', 19: 'duplicity-full.20160402T104304Z.vol19.difftar.gz', 20: 'duplicity-full.20160402T104304Z.vol20.difftar.gz', 21: 'duplicity-full.20160402T104304Z.vol21.difftar.gz', 22: 'duplicity-full.20160402T104304Z.vol22.difftar.gz', 23: 'duplicity-full.20160402T104304Z.vol23.difftar.gz', 24: 'duplicity-full.20160402T104304Z.vol24.difftar.gz', 25: 'duplicity-full.20160402T104304Z.vol25.difftar.gz', 26: 'duplicity-full.20160402T104304Z.vol26.difftar.gz', 27: 'duplicity-full.20160402T104304Z.vol27.difftar.gz', 28: 'duplicity-full.20160402T104304Z.vol28.difftar.gz', 29: 'duplicity-full.20160402T104304Z.vol29.difftar.gz', 30: 'duplicity-full.20160402T104304Z.vol30.difftar.gz', 31: 'duplicity-full.20160402T104304Z.vol31.difftar.gz', 32: 'duplicity-full.20160402T104304Z.vol32.difftar.gz', 33: 'duplicity-full.20160402T104304Z.vol33.difftar.gz', 34: 'duplicity-full.20160402T104304Z.vol34.difftar.gz', 35: 'duplicity-full.20160402T104304Z.vol35.difftar.gz', 36: 'duplicity-full.20160402T104304Z.vol36.difftar.gz', 37: 'duplicity-full.20160402T104304Z.vol37.difftar.gz', 38: 'duplicity-full.20160402T104304Z.vol38.difftar.gz', 39: 'duplicity-full.20160402T104304Z.vol39.difftar.gz', 40: 'duplicity-full.20160402T104304Z.vol40.difftar.gz', 41: 'duplicity-full.20160402T104304Z.vol41.difftar.gz', 42: 'duplicity-full.20160402T104304Z.vol42.difftar.gz', 43: 'duplicity-full.20160402T104304Z.vol43.difftar.gz', 44: 'duplicity-full.20160402T104304Z.vol44.difftar.gz', 45: 'duplicity-full.20160402T104304Z.vol45.difftar.gz', 46: 'duplicity-full.20160402T104304Z.vol46.difftar.gz', 47: 'duplicity-full.20160402T104304Z.vol47.difftar.gz', 48: 'duplicity-full.20160402T104304Z.vol48.difftar.gz', 49: 'duplicity-full.20160402T104304Z.vol49.difftar.gz', 50: 'duplicity-full.20160402T104304Z.vol50.difftar.gz'}, 'duplicity-full.20160402T104304Z.vol1.difftar') ```",40 118021180,2016-03-16 14:11:13.534,OverflowError while writing signatures (lp:#1558093),"[Original report](https://bugs.launchpad.net/bugs/1558093) created by **Compizfox (compizfox)** ``` Hi, This is my first time backing up with Duplicity. I'm trying to backup around 450 GB total. 
The command I use is: duplicity --volsize 1000 --asynchronous-upload --full-if-older-than 1M --encrypt-key ""$keyid"" --exclude-filelist ""$excludelist"" ""$local"" ""$remote"" My first full backup is almost finished, but at the end I get this error: Writing duplicity-full-signatures.20160315T174511Z.sigtar.gpg WebDAV PUT /remote.php/webdav/duplicity_ILOS/duplicity-full-signatures.20160315T174511Z.sigtar.gpg request with headers: {'Connection': 'keep-alive', 'Authorization': 'Basic '} WebDAV data length: 3944337508 Attempt 1 failed. OverflowError: string longer than 2147483647 bytes Because of this, it can't finish the backup. What should I do? Duplicity version 0.7.06 Python 2.7.11 FreeBSD 10.2-RELEASE Thanks in advance. ```",10 118021168,2016-03-07 16:19:50.661,Can't restore my backup (lp:#1554109),"[Original report](https://bugs.launchpad.net/bugs/1554109) created by **Antonio (antonio-herrera)** ``` Distributor ID: Ubuntu Description: Ubuntu 14.04.4 LTS Release: 14.04 Codename: trusty Deja Dup 34.1 Restore Failed Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1488, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1337, in main do_backup(action) File ""/usr/bin/duplicity"", line 1370, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 697, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 819, in get_backup_chains map(add_to_sets, filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 809, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 96, in add_filename self.set_manifest(filename) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 127, in set_manifest remote_filename) 
AssertionError: ('duplicity-full.20160221T020245Z.manifest.tar.gz', 'duplicity-full.20160221T020245Z.manifest') Any help would be very much appreciated. I have tried everything but nothing seems to work. ```",8 118022627,2016-02-22 23:10:07.310,Problem restoring sparse files (lp:#1548549),"[Original report](https://bugs.launchpad.net/bugs/1548549) created by **hpgisler (hpgisleropen)** ``` Making a backup which contains sparse files (e.g. a file 'data' of size 100GB, with 1.3GB real size) seems to work, i.e. the backup.gpg files seem of ok size, i.e. 1..2GB. However, restoring the file requires a lot of 'real' disk space (and seems quite slow); after approx. 8GB of disk space had been consumed, I aborted the restore process. Note, the disk size is approx. 50GB, so the creation of a real 100GB file would fail. I've tried to add the following option to duplicity: --rsync-options --sparse However, restoring now fails completely after the temporary restore file in the /temp folder reaches approx. 1GB (note: I've set --volsize 1000); it seems setting the sparse switch makes things worse rather than better. Perhaps I am missing something important here... What is the correct handling procedure with large sparse files? ```",10 118021158,2016-02-15 10:41:58.230,restore --force complains about existing symlinks and doesn't overwrite them (lp:#1545666),"[Original report](https://bugs.launchpad.net/bugs/1545666) created by **Martin Häcker (spamfaenger)** ``` We use duplicity to restore backups from a main machine to a backup machine, where we use them to run a failover service. Doing this we noticed that we get a) complaints about existing symlinks (even though we use --force for the restore) and b) that those symlinks are not updated if they change on the master machine. I think that this is a bug, as --force as per the documentation should mean that files are overwritten. 
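Regarding the symlink complaint above: on POSIX, os.symlink fails with EEXIST instead of replacing an existing entry, so a forced restore has to remove the old entry first. A minimal sketch of that behaviour (the helper name is illustrative, not duplicity's code):

```python
import errno
import os

def force_symlink(target, link_path):
    # Replace link_path with a symlink to target, mimicking what a
    # forced restore would need to do for a pre-existing entry.
    try:
        os.symlink(target, link_path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        os.unlink(link_path)
        os.symlink(target, link_path)
```

Without the unlink step the restore hits exactly the Errno 17 seen in the report, and the old link target survives.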
The command goes something like this: '/usr/bin/python /usr/bin/duplicity --name mirroring --encrypt-key $key --verbosity 4 -t now --force sftp://backup@backup.host/some_path /root/redumpster/mirrordir' And the output generated looks similar to this ``` Error '[Errno 17] File exists' while processing some/path/symlink ``` Please note that I have verified that changes in the existing symlinks are not overwritten by a subsequent `restore --force`. ```",8 118022327,2016-02-11 20:05:04.103,Uploading full signatures for large backup set fails with B2 backend (lp:#1544707),"[Original report](https://bugs.launchpad.net/bugs/1544707) created by **Maakuth (markus-vuorio)** ``` I have a backup set of around 450GiB. Uploading it to B2 went fine, but at the end the backup wasn't successful. I've tried it multiple times, but it always comes down to the same issue: ... Processed volume 17633 Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg Attempt 1 failed. SSLError: ('The read operation timed out',) Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg Attempt 2 failed. SSLError: ('The read operation timed out',) Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg Attempt 3 failed. SSLError: ('The read operation timed out',) Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg Attempt 4 failed. SSLError: ('The read operation timed out',) Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg ^NGiving up after 5 attempts. URLError: The file in question is around 3.4GiB. I'm thinking there's some special treatment one should do when uploading such big files to B2, but I'm not sure. I think B2 is supposed to support files up to 5GB. Duplicity 0.7.06 Python 2.7.9 Debian 8.3 ""Jessie"" Backing up from an XFS volume to Backblaze B2 I'll submit more log lines once I've captured them to a file. 
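A single PUT of a 3.4GiB sigtar is fragile over TLS; splitting the upload into bounded parts (B2 offers a large-file API for this) avoids very long single requests. A minimal sketch of the chunking step only, with an illustrative chunk size:

```python
def iter_chunks(fileobj, chunk_size=64 * 1024 * 1024):
    # Yield successive reads of at most chunk_size bytes, so no
    # single request body has to hold the whole file in memory.
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk
```

Each yielded chunk would then be sent as one part of a multipart upload; this also sidesteps 32-bit string-length limits like the OverflowError in the WebDAV report above.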
``` Original tags: b2",12 118021153,2016-02-10 09:58:22.185,Using onedrive backend fails with TypeError (lp:#1543976),"[Original report](https://bugs.launchpad.net/bugs/1543976) created by **ThomasL. (tht)** ``` I'm using duply to call duplicity on a Debian Jessie system. It uploads a few hundred MB each day to onedrive and was working for months without any issues. The last successful backup was on Jan 26th and since this date (almost) all commands always fail with the same error message: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1580, in if ""Forced assertion for testing"" in str(e): TypeError: __str__ returned non-string (type Error) Yes, it fails MOST of the time, not always. Today the following command executed successfully about 2-3 times in 50 tries. So I think it's quite unlikely it's a local problem or an issue in the duplicity onedrive backend. Also a colleague has exactly the same issue since about the same date. /usr/bin/duplicity collection-status --name duply_owncloud --encrypt-key B44F3F99 --sign-key B44F3F99 --verbosity 9 --gpg-options --compress- algo=bzip2 --ssl-no-check-certificate onedrive://backups/ownCloud I modified line 1580 to print out a full backtrace and this is the complete output: # /usr/bin/duplicity collection-status --name duply_owncloud --encrypt-key B44F3F99 --sign-key B44F3F99 --verbosity 9 --gpg-options --compress- algo=bzip2 --ssl-no-check-certificate onedrive://backups/ownCloud Using archive dir: /root/.cache/duplicity/duply_owncloud Using backup name: duply_owncloud Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of 
duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Using temporary directory /tmp/duplicity-4yt4Ft-tempdir Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1530, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1524, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1362, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1093, in ProcessCommandLine globals.backend = backend.get_backend(args[0]) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/lib/python2.7/dist- packages/duplicity/backends/onedrivebackend.py"", line 90, in __init__ self.initialize_oauth2_session() File ""/usr/lib/python2.7/dist- packages/duplicity/backends/onedrivebackend.py"", line 129, in initialize_oauth2_session user_info_response = self.http_client.get(self.API_URI + 'me') File ""/usr/lib/python2.7/dist-packages/requests/sessions.py"", line 469, in get 
return self.request('GET', url, **kwargs) File ""/usr/lib/python2.7/dist- packages/requests_oauthlib/oauth2_session.py"", line 257, in request headers=headers, data=data, **kwargs) File ""/usr/lib/python2.7/dist-packages/requests/sessions.py"", line 457, in request resp = self.send(prep, **send_kwargs) File ""/usr/lib/python2.7/dist-packages/requests/sessions.py"", line 569, in send r = adapter.send(request, **kwargs) File ""/usr/lib/python2.7/dist-packages/requests/adapters.py"", line 420, in send raise SSLError(e, request=request) SSLError: I've no experience in Python but for me this looks like an issue with SSL. Most likely provoked by the server side. And this is the output on a successful run (some minutes later) # /usr/bin/duplicity collection-status --name duply_owncloud --encrypt-key B44F3F99 --sign-key B44F3F99 --verbosity 9 --gpg-options --compress- algo=bzip2 --ssl-no-check-certificate onedrive://backups/ownCloud Using archive dir: /root/.cache/duplicity/duply_owncloud Using backup name: duply_owncloud Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of 
duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded OneDrive id for the configured directory ""backups/ownCloud"" is ""folder.56f6b3babc8c79f2.56F6B3BABC8C79F2!1494"" Main action: collection-status ================================================================================ duplicity 0.7.03 (May 11, 2015) Args: /usr/bin/duplicity collection-status --name duply_owncloud --encrypt- key B44F3F99 --sign-key B44F3F99 --verbosity 9 --gpg-options --compress- algo=bzip2 --ssl-no-check-certificate onedrive://backups/ownCloud Linux srvt01 4.2.0-0.bpo.1-amd64 #1 SMP Debian 4.2.6-3~bpo8+2 (2015-12-14) x86_64 /usr/bin/python 2.7.9 (default, Mar 1 2015, 12:57:24) [GCC 4.9.2] ================================================================================ Local and Remote metadata are synchronized, no sync needed. 
797 files exist on backend 163 files exist in cache Extracting backup chains from list of files: [u'duplicity- full.20150301T233003Z.manifest.gpg', u'duplicity- full.20150301T233003Z.vol1.difftar.gpg', u'duplicity- full.20150301T233003Z.vol10.difftar.gpg', u'duplicity- full.20150301T233003Z.vol11.difftar.gpg', u'duplicity- full.20150301T233003Z.vol12.difftar.gpg', u'duplicity- full.20150301T233003Z.vol13.difftar.gpg', u'duplicity- full.20150301T233003Z.vol14.difftar.gpg', u'duplicity- full.20150301T233003Z.vol15.difftar.gpg', u'duplicity- full.20150301T233003Z.vol16.difftar.gpg', u'duplicity- -- LOTS OF LINES REMOVED -- Found primary backup chain with matching signature chain: ------------------------- Chain start time: Mon Jan 4 00:30:05 2016 Chain end time: Tue Jan 26 00:30:06 2016 Number of contained backup sets: 21 Total number of contained volumes: 103 Type of backup set: Time: Num volumes: Full Mon Jan 4 00:30:05 2016 83 Incremental Tue Jan 5 00:30:07 2016 1 Incremental Wed Jan 6 00:30:06 2016 1 Incremental Thu Jan 7 00:30:08 2016 1 Incremental Fri Jan 8 00:30:06 2016 1 Incremental Sat Jan 9 00:30:06 2016 1 Incremental Sun Jan 10 00:30:10 2016 1 Incremental Mon Jan 11 00:30:07 2016 1 Incremental Tue Jan 12 00:30:08 2016 1 Incremental Wed Jan 13 00:30:06 2016 1 Incremental Thu Jan 14 00:30:07 2016 1 Incremental Fri Jan 15 00:30:06 2016 1 Incremental Sat Jan 16 00:30:07 2016 1 Incremental Sun Jan 17 00:30:06 2016 1 Incremental Mon Jan 18 00:30:06 2016 1 Incremental Tue Jan 19 00:30:05 2016 1 Incremental Wed Jan 20 00:30:06 2016 1 Incremental Thu Jan 21 00:30:05 2016 1 Incremental Fri Jan 22 00:30:07 2016 1 Incremental Sun Jan 24 00:30:06 2016 1 Incremental Tue Jan 26 00:30:06 2016 1 ------------------------- No orphaned or incomplete backup sets found. Releasing lockfile Using temporary directory /tmp/duplicity-9bGfwW-tempdir Duplicity is absolutely perfect to backup to storage you don't trust (as onedrive). 
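Intermittent failures like the SSLError above are normally absorbed by retrying with backoff, which is what duplicity's 'Attempt N failed' wrapper does in spirit. A minimal standalone sketch (names and defaults are illustrative):

```python
import time

def with_retries(fn, attempts=5, delay=1.0, retry_on=(OSError,)):
    # Call fn, retrying on the given exception types and doubling
    # the sleep between attempts; re-raise after the final failure.
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2
```

When only 2-3 tries in 50 succeed, as described here, retries alone are not enough and the backoff mostly serves to avoid hammering the remote while it is unhealthy.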
But it looks like the access to onedrive is so unreliable since a few weeks it's unusable. I've tried accessing onedrive using the web browser and this works perfect all the time and I still have about 800GB of free space on my onedrive account. Anyone else experiencing the same problems? Any ideas how to solve this? My environment: - duplicity 0.7.03 - duply v1.9.1 - Python 2.7.9 - Debian Jessie 64bit ```",6 118021137,2016-01-31 22:25:32.083,PyDrive Backend Fails: RelativeURIError (lp:#1540161),"[Original report](https://bugs.launchpad.net/bugs/1540161) created by **JP (0j-p)** ``` Updated duplicity 2016-01-29 in Changelog.GNU, ran my backup script for pydrive and hit this error: Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import 
of duplicity.backends.webdavbackend Failed: No module named kerberos Using temporary directory /tmp/duplicity-TdL142-tempdir Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1537, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1531, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1369, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1116, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1005, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/pydrivebackend.py"", line 58, in __init__ file_list = self.drive.ListFile({'q': ""'Root' in parents and trashed=false""}).GetList() File ""/usr/local/lib/python2.7/dist-packages/pydrive/apiattr.py"", line 154, in GetList for x in self: File ""/usr/local/lib/python2.7/dist-packages/pydrive/apiattr.py"", line 138, in next result = self._GetList() File ""/usr/local/lib/python2.7/dist-packages/pydrive/auth.py"", line 54, in _decorated return decoratee(self, *args, **kwargs) File ""/usr/local/lib/python2.7/dist-packages/pydrive/files.py"", line 56, in _GetList self.metadata = self.auth.service.files().list(**dict(self)).execute() File ""/usr/local/lib/python2.7/dist-packages/oauth2client/util.py"", line 140, in positional_wrapper return wrapped(*args, **kwargs) File ""/usr/local/lib/python2.7/dist-packages/googleapiclient/http.py"", line 722, in execute body=self.body, headers=self.headers) File ""/usr/local/lib/python2.7/dist-packages/oauth2client/client.py"", 
line 596, in new_request redirections, connection_type) File ""/usr/lib/python2.7/dist-packages/httplib2/__init__.py"", line 1440, in request (scheme, authority, request_uri, defrag_uri) = urlnorm(uri) File ""/usr/lib/python2.7/dist-packages/httplib2/__init__.py"", line 217, in urlnorm raise RelativeURIError(""Only absolute URIs are allowed. uri = %s"" % uri) RelativeURIError: Only absolute URIs are allowed. uri = https:/drive/v2/files?q=%27Root%27+in+parents+and+trashed%3Dfalse&alt=json&maxResults=1000 ```",6 118021129,2016-01-25 15:53:56.094,Progress should override output (lp:#1537799),"[Original report](https://bugs.launchpad.net/bugs/1537799) created by **Wernight (werner-beroux)** ``` Currently outputs with --progress look like: 2.6GB 00:29:40 [1.6MB/s] [==========> ] 26% ETA 1h 21min 2.6GB 00:29:43 [1.6MB/s] [==========> ] 26% ETA 1h 21min 2.6GB 00:29:46 [1.4MB/s] [==========> ] 26% ETA 1h 21min 2.6GB 00:29:49 [1.5MB/s] [==========> ] 26% ETA 1h 21min 2.7GB 00:29:52 [1.5MB/s] [==========> ] 26% ETA 1h 21min 2.7GB 00:29:55 [1.6MB/s] [==========> ] 26% ETA 1h 21min 2.7GB 00:29:58 [1.6MB/s] [==========> ] 26% ETA 1h 21min 2.7GB 00:30:01 [1.4MB/s] [==========> ] 26% ETA 1h 21min 2.7GB 00:30:04 [1.5MB/s] [==========> ] 27% ETA 1h 21min 2.7GB 00:30:07 [1.5MB/s] [==========> ] 27% ETA 1h 21min 2.7GB 00:30:10 [1.6MB/s] [==========> ] 27% ETA 1h 21min 2.7GB 00:30:13 [1.6MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:17 [1.6MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:20 [1.4MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:23 [1.5MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:26 [1.6MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:29 [1.6MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:32 [1.7MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:35 [1.4MB/s] [==========> ] 27% ETA 1h 20min 2.7GB 00:30:38 [1.4MB/s] [===========> ] 27% ETA 1h 20min 2.7GB 00:30:41 [1.5MB/s] [===========> ] 27% ETA 1h 20min It could easily just update that line. 
There are two simple ways to do that (in Python):

import sys
from time import sleep

# '\r': simpler way, but has an issue when the console width is too small and the output gets wrapped.
for i in range(0, 101, 5):
    sys.stdout.write(""Download progress: {}% \r"".format(i))
    sys.stdout.flush()
    # Do work here...
    sleep(0.1)
sys.stdout.write('\n')

# '\b': better way
sys.stdout.write(""Download progress: "")
for i in range(0, 101, 5):
    s = '%d%%' % i
    sys.stdout.write(s)
    sys.stdout.flush()  # flush so each step actually appears before the backspaces
    # Do work here...
    sleep(0.1)
    sys.stdout.write('\b' * len(s))
sys.stdout.write('\n')
```",10 118021126,2016-01-15 16:55:04.530,backblaze connection issue? (lp:#1534663),"[Original report](https://bugs.launchpad.net/bugs/1534663) created by **Steven Verbeek (dubcanada)** ``` Hello, I am using Duplicity 0.7.06 from the stable branch on Ubuntu 14.04 and I have tried two different servers on two completely different networks. No matter what I do I cannot upload to Backblaze B2 using duplicity; the Backblaze python pusher and their other tools all work fine, but duplicity fails when trying to upload.
Backblaze keeps sending me either Backtrace of previous error: Traceback (innermost last): File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 365, in inner_retry return fn(self, *args) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 540, in put self.__do_put(source_path, remote_filename) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 526, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/b2backend.py"", line 116, in _put self.get_or_post(url, None, headers, data_file=data_file) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/b2backend.py"", line 256, in get_or_post with OpenUrl(url, data, encoded_headers) as resp: File ""/usr/lib/python2.7/dist-packages/duplicity/backends/b2backend.py"", line 332, in __enter__ self.file = urllib2.urlopen(request) File ""/usr/lib/python2.7/urllib2.py"", line 127, in urlopen return _opener.open(url, data, timeout) File ""/usr/lib/python2.7/urllib2.py"", line 404, in open response = self._open(req, data) File ""/usr/lib/python2.7/urllib2.py"", line 422, in _open '_open', req) File ""/usr/lib/python2.7/urllib2.py"", line 382, in _call_chain result = func(*args) File ""/usr/lib/python2.7/urllib2.py"", line 1222, in https_open return self.do_open(httplib.HTTPSConnection, req) File ""/usr/lib/python2.7/urllib2.py"", line 1184, in do_open raise URLError(err) URLError: or http://laravel.io/bin/kWmnE errors. I contacted them about the errors and they said they were going to fix the 500 one, so that one can be ignored. 
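The trace above ends in a bare URLError because urllib2 wraps the underlying socket error and the b2 backend lets it propagate unchanged. A generic sketch of surfacing the wrapped reason instead (illustrative only, not duplicity's actual code; the fetch helper is invented):

```python
try:
    # Python 2, as used by duplicity 0.7.x
    from urllib2 import urlopen, URLError
except ImportError:
    # Python 3 fallback so the sketch stays runnable
    from urllib.request import urlopen
    from urllib.error import URLError

def fetch(url):
    # Hypothetical helper: re-raise with the wrapped reason attached,
    # instead of letting an empty-looking URLError reach the log.
    try:
        return urlopen(url, timeout=5)
    except URLError as e:
        raise RuntimeError('B2 request failed: %s' % e.reason)
```

With the reason attached, the log would show what actually went wrong (a connection reset, a DNS failure, a TLS error) rather than the bare URLError above.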
```",6 118021119,2015-11-16 21:19:50.942,0.6.25->0.7.05 fatal performance regression (lp:#1516788),"[Original report](https://bugs.launchpad.net/bugs/1516788) created by **Jan Kratochvil (jan-kratochvil)** ``` Running on updated Fedora 23 x86_64: duplicity-0.7.05-1.fc23.x86_64 21:57:20 lstat(""/etc/rc.d/rc4.d/K74gfs2"", {st_mode=S_IFLNK|0777, st_size=14, ...}) = 0 21:57:20 readlink(""/etc/rc.d/rc4.d/K74gfs2"", ""../init.d/gfs2"", 4096) = 14 21:57:24 lstat(""/etc/rc.d/rc4.d/K85ebtables"", {st_mode=S_IFLNK|0777, st_size=18, ...}) = 0 21:57:24 readlink(""/etc/rc.d/rc4.d/K85ebtables"", ""../init.d/ebtables"", 4096) = 18 21:57:28 lstat(""/etc/rc.d/rc4.d/K90network"", {st_mode=S_IFLNK|0777, st_size=17, ...}) = 0 21:57:28 readlink(""/etc/rc.d/rc4.d/K90network"", ""../init.d/network"", 4096) = 17 21:57:33 lstat(""/etc/rc.d/rc4.d/S01sandbox"", {st_mode=S_IFLNK|0777, st_size=17, ...}) = 0 21:57:33 readlink(""/etc/rc.d/rc4.d/S01sandbox"", ""../init.d/sandbox"", 4096) = 17 - it was running for 2 hours and reading each file for 4 seconds is unusable. 26.69% duplicity libpython2.7.so.1.0 [.] PyEval_EvalFrameEx 4.73% duplicity libc-2.22.so [.] vfprintf 3.70% duplicity libpython2.7.so.1.0 [.] lookdict_string 3.70% duplicity libpython2.7.so.1.0 [.] PyUnicodeUCS4_DecodeUTF8Stateful 3.47% duplicity libpython2.7.so.1.0 [.] PyString_Format 3.11% duplicity libpython2.7.so.1.0 [.] _PyObject_GenericGetAttrWithDict 3.09% duplicity libpython2.7.so.1.0 [.] PyEval_EvalCodeEx 1.85% duplicity libpython2.7.so.1.0 [.] convertitem 1.85% duplicity libpython2.7.so.1.0 [.] 
PyFrame_New After downgrading just the package duplicity to 0.6.25 the backup was successfully done in 4 minutes, like always before: duplicity-0.6.25-3.fc21.x86_64 --------------[ Backup Statistics ]-------------- StartTime 1447707702.06 (Mon Nov 16 22:01:42 2015) EndTime 1447707987.63 (Mon Nov 16 22:06:27 2015) ElapsedTime 285.57 (4 minutes 45.57 seconds) SourceFiles 92925 SourceFileSize 2516870978 (2.34 GB) NewFiles 51428 NewFileSize 179314732 (171 MB) DeletedFiles 22722 ChangedFiles 1196 ChangedFileSize 284383140 (271 MB) ChangedDeltaSize 0 (0 bytes) DeltaEntries 75346 RawDeltaSize 210169249 (200 MB) TotalDestinationSizeChange 54708267 (52.2 MB) Errors 0 ------------------------------------------------- 11G host1-backup I am unsure how to do Python-source-level performance monitoring. duplicity --archive-dir /root/backup/host2-signature2 --allow-source-mismatch --encrypt-key 1F0D6D7B --sign-key 1F0D6D7B --exclude-other-filesystems --exclude-filelist /tmp/host2-run.rpmsafe --exclude /var/spool/squid [...] --exclude '/usr/lib/jvm/*/jre/lib/amd64/server/classes.jsa' / file:///host1/root/backup/host2-backup2 3.0G /root/backup/host2-signature2 68G /host1/root/backup/host2-backup2 wc -l: 558303 /tmp/host2-run.rpmsafe python-2.7.10-8.fc23.x86_64 Linux ext4 ```",6 118021116,2015-11-01 05:16:25.212,"Misleading ""Ignoring incremental Backupset"" message (lp:#1512055)","[Original report](https://bugs.launchpad.net/bugs/1512055) created by **Tobias G. Pfeiffer (tgpfeiffer)** ``` I have been doing my duplicity backups to Glacier for a while now. First I used a kind of hack to do a normal AWS upload, then rename all the difftar files to match S3 lifecycle rules. Using that method, I got ""Ignoring incremental Backupset"" messages on subsequent backups, maybe because there were no data files found using the expected file name pattern. Then recently I upgraded to the latest duplicity version and switched to the `--file-prefix-archive` method, which feels a lot better.
However, I still had the ""Ignoring incremental Backupset"" message in my logs which confused me a bit, since all commands seem to behave correctly. I found out that when an incremental backupset is found, `chain.add_inc(set)` is called on all chains until one of these calls returns True, and every call before will emit an ""Ignoring incremental Backupset"" message. So the very normal operation of ""finding the correct chain"" emits a message that indicates something is ignored or missing or not matching etc. I think the functionality of finding the correct chain should be changed so that no such warning message is emitted in order to avoid user confusion. ```",10 118021115,2015-10-11 08:50:16.053,pydrive backend cannot recover if remote files missing (lp:#1504908),"[Original report](https://bugs.launchpad.net/bugs/1504908) created by **Kuang-che Wu (kcwu)** ``` How to reproduce duplicity 0.7.05 1. backup with pydrive backend 2. delete one of manifest.gpg file on remote server 3. run duplicity backup again error messages: Attempt 1 failed. AttributeError: 'NoneType' object has no attribute 'GetContentFile' Attempt 2 failed. AttributeError: 'NoneType' object has no attribute 'GetContentFile' Attempt 3 failed. AttributeError: 'NoneType' object has no attribute 'GetContentFile' Attempt 4 failed. AttributeError: 'NoneType' object has no attribute 'GetContentFile' Giving up after 5 attempts. AttributeError: 'NoneType' object has no attribute 'GetContentFile' FYI, I have a patch but I don't know whether it is a correct fix or not. 
diff --git a/duplicity/backends/pydrivebackend.py b/duplicity/backends/pydrivebackend.py
index 2b1a805..4181dd8 100644
--- a/duplicity/backends/pydrivebackend.py
+++ b/duplicity/backends/pydrivebackend.py
@@ -135,6 +135,8 @@ class PyDriveBackend(duplicity.backend.Backend):
     def _get(self, remote_filename, local_path):
         drive_file = self.file_by_name(remote_filename)
+        if drive_file is None:
+            raise BackendException(""failed to get '%s'"" % remote_filename)
         drive_file.GetContentFile(local_path.name)

     def _list(self):
```",6 118021107,2015-10-08 05:10:10.986,With 70% free space duplicity dies with [Errno 28] No space left on device: (lp:#1503960),"[Original report](https://bugs.launchpad.net/bugs/1503960) created by **Wojciech Adam Koszek (wkoszek)** ``` I have the Synology DS214play with an Atom CPU and 1GB of RAM. I have a RAID mirror with 2x1TB disks, and 2 USB volumes attached: a 320GB and a 1TB disk. In this report I attempt to test-drive duplicity on the 1TB volume by running: root@wkoszek_nas:/mnt/volumeUSB2# time duplicity --no-encryption /home/wkoszek file://`pwd`/backup-duplicity.20151005 > duplicity.report.txt [Errno 28] No space left on device: '/mnt/volumeUSB2/backup-duplicity.20151005/duplicity-full.20151006T085704Z.vol13107.difftar.gz' real 1724m51.360s user 1449m55.449s sys 118m55.726s Report file: Local and Remote metadata are synchronized, no sync needed. Last full backup left a partial set, restarting. Last full backup date: Tue Oct 6 01:55:12 2015 RESTART: The first volume failed to upload before termination.          Restart is impossible...starting backup from beginning. Local and Remote metadata are synchronized, no sync needed. Last full backup date: none No signatures found, switching to full backup.
Mount points: /dev/root 2451064 1594404 754260 68% / /tmp 358092 180 357912 1% /tmp /run 358092 2544 355548 1% /run /dev/shm 358092 0 358092 0% /dev/shm /volume1/homes 956675772 922960672 33612700 97% /volume1/@appstore/debian-chroot/var/chroottarget/home /volumeUSB1/usbshare 312494912 153154016 159340896 50% /volume1/@appstore/debian-chroot/var/chroottarget/mnt/volumeUSB1 /volumeUSB2/usbshare 976251072 335877424 640373648 35% /volume1/@appstore/debian-chroot/var/chroottarget/mnt/volumeUSB2 /dev 355860 4 355856 1% /volume1/@appstore/debian-chroot/var/chroottarget/dev I run duplicity from a chroot'ed Debian environment, with duplicity installed via apt-get. Version: root@wkoszek_nas:/mnt/volumeUSB2# duplicity --version duplicity 0.6.24 Dir entry to the backup directory: drwxrwxrwx 2 1024 users 2097152 Oct 7 06:41 backup-duplicity.20151005 ---------------- this paragraph is wrong and written by mistake ---------- I couldn't figure out what's wrong, but I feel this is a problem: root@wkoszek_nas:/mnt/volumeUSB2# du -ch backup-duplicity.20151005/ 320G backup-duplicity.20151005/ 320G total root@wkoszek_nas:/mnt/volumeUSB2# ls -1 backup-duplicity.20151005/ | wc -l 13106 And some random websites state that VFAT (which I believe indicates FAT32) has a low limit on files in a directory: http://superuser.com/questions/446282/max-files-per-directory-on-ntfs-vol-vs-fat32 ------------------------------------------------------------------------ The question remains the same: Is there a good reason why duplicity is storing all files in 1 directory? ```",6 118019376,2015-10-03 18:15:13.978,Major Storage Space Use After Moving Files (lp:#1502478),"[Original report](https://bugs.launchpad.net/bugs/1502478) created by **Christoph Michelbach (hj7-c)** ``` For some time I had a suspicion that Deja Dup saves files again after they have been moved to a different location.
Since I now had to move my pictures and some documents, I tested it: I made a backup via Deja Dup, looked how much storage space the backup takes by now, moved what I wanted to move to where I wanted to move it to, made another backup, and looked how much space it takes up now. So I started out at 38.2 GB and the backup now uses 46.3 GB. The difference of 8.1 GB also is the size of the stuff I moved. This behavior takes up a lot of storage space unnecessarily and could be avoided by identifying a file by its hash. $ dpkg-query -W deja-dup duplicity deja-dup 32.0-0ubuntu5 duplicity 0.7.01-1ubuntu1 ```",6 118021097,2015-09-26 09:52:00.419,"""AssertionError len(chain_list) == 2"" after completing a full backup, unable to do incremental (lp:#1499990)","[Original report](https://bugs.launchpad.net/bugs/1499990) created by **Artur Bodera (abodera)** ``` duplicity 0.7.04 (August 02, 2015) /usr/local/opt/python/bin/python2.7 2.7.10 (default, Jul 13 2015, 12:05:58) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] Upon finishing a multi-day, initial, full backup, i've received the following at the very end: --------------------- Traceback (most recent call last): File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1528, in with_tempdir(main) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1522, in with_tempdir fn() File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1376, in main do_backup(action) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1401, in do_backup globals.archive_dir).set_values() File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 721, in set_values backup_chains) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 734, in set_matched_chain_pair sig_chains = sig_chains and self.get_sorted_chains(sig_chains) File 
""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 951, in get_sorted_chains assert len(chain_list) == 2 AssertionError ---------------------- I am unable to start another incremental, because the error keeps appearing every time. My local and remote meta are synchronized: ---------- l ~/.cache/duplicity/backup-name total 24810848 drwxr-xr-x 6 Thinkscape staff 204B Sep 26 11:17 . drwxr-xr-x 7 Thinkscape staff 238B Sep 20 13:40 .. -rw-r--r-- 1 Thinkscape staff 5.0G Sep 26 11:12 duplicity-full- signatures.20150920T135054Z.sigtar.gpg.1.gz -rw-r--r-- 1 Thinkscape staff 933M Sep 26 11:17 duplicity-full- signatures.20150920T135054Z.sigtar.gpg.2.gz -rw------- 1 Thinkscape staff 5.9G Sep 26 04:37 duplicity-full- signatures.20150920T135054Z.sigtar.gz -rw------- 1 Thinkscape staff 8.4M Sep 26 04:37 duplicity- full.20150920T135054Z.manifest ----------- Here's incremental attempt (paths and names obfuscated): -------------------- Mode not provided - assuming incremental Using archive dir: dir Using backup name: backup-name Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of 
duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: inc ================================================================================ duplicity 0.7.04 (August 02, 2015) Args: /usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity incremental --asynchronous-upload --verbosity info --name backup-name --encrypt-key AAAAAAAAA --volsize 256 --exclude /path/*.sparsebundle --log-file /var/log/duplicity.log /path cf+hubic://backup Darwin Taco.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 i386 /usr/local/opt/python/bin/python2.7 2.7.10 (default, Jul 13 2015, 12:05:58) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] ================================================================================ Using temporary directory /var/folders/sn/lw4b5p6s6793ghngffqsy8cc0000gn/T/duplicity-_YiLnY-tempdir Temp has 188218208256 available, backup will use approx 617401548. Local and Remote metadata are synchronized, no sync needed. 
Traceback (most recent call last): File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1528, in with_tempdir(main) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1522, in with_tempdir fn() File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1376, in main do_backup(action) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/bin/duplicity"", line 1401, in do_backup globals.archive_dir).set_values() File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 721, in set_values backup_chains) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 734, in set_matched_chain_pair sig_chains = sig_chains and self.get_sorted_chains(sig_chains) File ""/usr/local/Cellar/duplicity/0.7.04_1/libexec/lib/python2.7/site- packages/duplicity/collections.py"", line 951, in get_sorted_chains assert len(chain_list) == 2 AssertionError ---------------- Please help. ```",10 118021091,2015-09-21 19:58:04.365,"SSLError: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')] (lp:#1498165)","[Original report](https://bugs.launchpad.net/bugs/1498165) created by **Jorman Franzini (jorman-franzini)** ``` Hi, I've some problem on duplicity run, my config is: Using installed duplicity version 0.7.05, python 2.7.3, gpg 1.4.7 (Home: ~/.gnupg), awk 'GNU Awk 4.0.1', grep 'grep (GNU grep) 2.12', bash '3.2.54(1)-release (i686-unknown-linux-gnu)'. 
And this's the log for the run: Traceback (most recent call last): File ""/opt/bin/duplicity"", line 1525, in with_tempdir(main) File ""/opt/bin/duplicity"", line 1519, in with_tempdir fn() File ""/opt/bin/duplicity"", line 1357, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/opt/lib/python2.7/site-packages/duplicity/commandline.py"", line 1103, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/opt/lib/python2.7/site-packages/duplicity/commandline.py"", line 994, in set_backend globals.backend = backend.get_backend(bend) File ""/opt/lib/python2.7/site-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/opt/lib/python2.7/site-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/opt/lib/python2.7/site- packages/duplicity/backends/onedrivebackend.py"", line 90, in __init__ self.initialize_oauth2_session() File ""/opt/lib/python2.7/site- packages/duplicity/backends/onedrivebackend.py"", line 129, in initialize_oauth2_session user_info_response = self.http_client.get(self.API_URI + 'me') File ""/opt/local/lib/python2.7/site-packages/requests/sessions.py"", line 477, in get return self.request('GET', url, **kwargs) File ""/opt/local/lib/python2.7/site- packages/requests_oauthlib/oauth2_session.py"", line 287, in request token = self.refresh_token(self.auto_refresh_url) File ""/opt/local/lib/python2.7/site- packages/requests_oauthlib/oauth2_session.py"", line 250, in refresh_token timeout=timeout, verify=verify) File ""/opt/local/lib/python2.7/site-packages/requests/sessions.py"", line 508, in post return self.request('POST', url, data=data, json=json, **kwargs) File ""/opt/local/lib/python2.7/site- packages/requests_oauthlib/oauth2_session.py"", line 303, in request headers=headers, data=data, **kwargs) File ""/opt/local/lib/python2.7/site-packages/requests/sessions.py"", line 465, in request resp = self.send(prep, 
**send_kwargs) File ""/opt/local/lib/python2.7/site-packages/requests/sessions.py"", line 573, in send r = adapter.send(request, **kwargs) File ""/opt/local/lib/python2.7/site-packages/requests/adapters.py"", line 431, in send raise SSLError(e, request=request) SSLError: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')] Any idea? J ```",6 118021087,2015-09-19 01:14:39.865,wishlist: Please support Amazon's new Standard-IA storage option (lp:#1497487),"[Original report](https://bugs.launchpad.net/bugs/1497487) created by **az (az-debian)** ``` this is a forward of debian bug 799237 which lives over there: http://bugs.debian.org/799237 the original submitter requested the following feature amendment: ---quote--- Amazon just announced a new storage option, ""Standard - Infrequently Accessed"", with pricing that makes sense for storing backups. Please consider adding a duplicity option to support it, similar to the existing --s3-use-rrs option to use Reduced Redundancy storage. It might make sense for the new option to support passing the name of a storage class, rather than introducing a new boolean option, to make it easier to add more storage classes in the future. ---quote--- ```",12 118018978,2015-09-15 21:37:27.170,Add option for volume numbering width (lp:#1496153),"[Original report](https://bugs.launchpad.net/bugs/1496153) created by **Dionisio E Alonso (baco)** ``` It would be nice to have the option to set the volume numbering width (number of digits for volXXX) in order to generate names like duplicity- full-...-vol01.difftar... or ...-vol001... so they became ordered when listed sorted by name. ``` Original tags: enhacement",6 118021084,2015-09-09 02:41:46.371,Wishlist: Amazon Cloud Drive backend (lp:#1493632),"[Original report](https://bugs.launchpad.net/bugs/1493632) created by **Elan Kugelmass (epkugelmass)** ``` I'd like to see a backend for Amazon Cloud Drive included with duplicity. 
Amazon recently changed their pricing strategy to be much more aggressive. They are offering unlimited storage for about $60/year. They also recently exposed a REST api for the service (https://developer.amazon.com/public/apis/experience/cloud- drive/content/restful-api). ACS is presumably powered by S3 -- and if the APIs are similar, this might be a relatively easy (and much appreciated!) backend to implement. ``` Original tags: wishlist",20 118021069,2015-08-26 19:47:03.535,Pydrive broken resume: not dealing with duplicate filenames (lp:#1489145),"[Original report](https://bugs.launchpad.net/bugs/1489145) created by **mfitz (mfitz)** ``` Google drive supports two files with the same name in the same place. This isn't being handled currently. Resume from interrupted full backup does not currently work, here are some reasons: * On resume, it re-uploads last file in manifest without deleting it from the server beforehand, resulting in two files of the same name. It attempts to add the file to an internal python list/dictionary and crashes. * On occasion, the same file is added twice. On reading the server, it attempts to add the file to an internal python list/dictionary and crashes. Workarounds: * On resume, always delete last file. Manifest should always contain more files. * When duplicate files are found ""in the middle"", duplicity will crash, giving the duplicate filename in the array printed (last element). Do this over and over until all dupes are gone. Duplicity 7.04 Python 2.7.10 ```",6 118021066,2015-08-17 20:41:04.678,Pydrive: tabs in google_drive_settings (lp:#1485756),"[Original report](https://bugs.launchpad.net/bugs/1485756) created by **mfitz (mfitz)** ``` Duplicity version:7.04 ""Google drive settings file"" as written in the man page doesn't make clear that the file is yaml and therefore cannot contain tab characters. Currently, if the settings file contains nonsense such as a tab character, it is skipped. 
An unrelated error to do with json files will be displayed instead because all the settings are ignored. ```",6 118018963,2015-08-08 07:42:06.716,Progress not working with many backends (lp:#1482841),"[Original report](https://bugs.launchpad.net/bugs/1482841) created by **Artur Bodera (abodera)** ``` The --progress flag and progress bar seem to be broken with a lot of backends. So far I've tested CF+Hubic and onedrive, both behave the same way. With --verbosity debug I can see that subsequent volumes are being sent to backend for transfer, backend keeps transferring them, but the progress bar after 8 hours of work looks the same: 0.0KB 08:00:00 [0.0B/s] [> ] 0% ETA Stalled! 0.0KB 08:00:03 [0.0B/s] [> ] 0% ETA Stalled! 0.0KB 08:00:06 [0.0B/s] [> ] 0% ETA Stalled! 0.0KB 08:00:09 [0.0B/s] [> ] 0% ETA Stalled! It never moves. I understand that some backends might not be good at reporting speed or file transfer progress, but I don't see why the progress bar would not be updated at least after each volume has been transferred ( no. MB transferred / no MB total to transfer = progress). ```",162 118019205,2015-08-01 15:00:42.258,Replace optparse with argparse for python 2.7+ (lp:#1480565),"[Original report](https://bugs.launchpad.net/bugs/1480565) created by **Aaron Whitehouse (aaron-whitehouse)** ``` As of python 2.7, optparse is deprecated in favour of argparse: https://docs.python.org/2/library/optparse.html https://docs.python.org/2/library/argparse.html At some point, we should therefore change commandline.py to use argparse instead of optparse. Note that argparse is new in 2.7, so isn't included by default in python 2.6 and below. It is, however, available as a separate package that is compatible with python 2.3+. Options for supporting python 2.6 after updating commandline.py would be to require the installation of the separate package, or include a copy of the module with duplicity. 
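As a minimal sketch of what the migration looks like (the option names here are illustrative and the parsing is simplified, not duplicity's full interface):

```python
import argparse

# optparse's OptionParser/add_option map almost one-to-one onto
# argparse's ArgumentParser/add_argument; the options below are
# examples only.
parser = argparse.ArgumentParser(prog='duplicity')
parser.add_argument('--volsize', type=int, default=25,
                    help='volume size in MB')
parser.add_argument('--progress', action='store_true',
                    help='display a progress bar')

args = parser.parse_args(['--volsize', '50', '--progress'])
print(args.volsize, args.progress)  # 50 True
```

argparse also generates --help output and usage errors automatically, which is part of the motivation for the switch.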
```",6 118021063,2015-07-06 14:04:07.423,collection-status removes local running backup files (sigtar) (lp:#1471818),"[Original report](https://bugs.launchpad.net/bugs/1471818) created by **John Jasen (jjasen)** ``` Duplicity: 0.6.18-3 (debian stock) Python: 2.7.3-4+deb7 (debian stock) OS: Debian GNU/Linux 7.8 (wheezy) We run duplicity --collection-status as part of a periodic system health cron job (to feed passive nagios checks). The health script runs every 5 minutes. During the normal backup run, on the local disk, signatures are written to $filename.sigtar.part. That file is then copied to $filename.sigtar and gzipped. If $filename.sigtar.part is being copied to $filename.sigtar during a collection-status, duplicity will remove $filename.sigtar. This causes local backups to show as failed/incomplete, and duplicity to fail with an error -- usually something along the lines of: ""OSError: [Errno 2] No such file or directory: '/var/cache/duplicity/$hostname/@/duplicity-full-signatures.$date-time-stamp.sigtar'"" This can be repeated, perhaps, by unzipping $filename.sigtar.gz and running collection-status. It can definitely be repeated by cp'ing $filename.sigtar.part to $filename.sigtar, and running collection-status. ```",6 118021061,2015-06-15 07:45:25.635,TotalDestinationSizeChange doesn't count par2 data (lp:#1465165),"[Original report](https://bugs.launchpad.net/bugs/1465165) created by **Kuang-che Wu (kcwu)** ``` Currently, TotalDestinationSizeChange is just the sum of the data volumes. However, backends may have additional overhead specific to each backend. In particular, the par2 redundancy files generated by the par2 backend are not counted in TotalDestinationSizeChange. In other words, the general form of this problem is that TotalDestinationSizeChange should be calculated by the backends.
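A hypothetical sketch of that general form, with each backend reporting the bytes it actually stored (all names here are invented for illustration):

```python
class PutResult(object):
    # Bytes written for one volume, as reported by a (hypothetical) backend.
    def __init__(self, payload_bytes, overhead_bytes=0):
        self.payload_bytes = payload_bytes    # the difftar volume itself
        self.overhead_bytes = overhead_bytes  # e.g. par2 recovery files

def total_destination_size_change(results):
    # Sum what the backends actually stored, not just the volume data.
    return sum(r.payload_bytes + r.overhead_bytes for r in results)

# Two 1000-byte volumes, each with 100 bytes of par2-style redundancy:
results = [PutResult(1000, 100), PutResult(1000, 100)]
print(total_destination_size_change(results))  # 2200
```

Under this scheme a plain backend reports zero overhead and the statistic is unchanged, while the par2 backend's redundancy files are counted.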
```",6 118021050,2015-06-08 04:41:10.708,pydrive backend uploads volumes multiple times (lp:#1462862),"[Original report](https://bugs.launchpad.net/bugs/1462862) created by **Zhongfu Li (zhongfu)** ``` When duplicity is backing up with pydrive, it occasionally uploads volumes and updates the manifest with those volumes multiple times (although I've only seen it done twice, and usually one volume at a time). I've not exactly pinpointed the cause, but it seems like something to do with asynchronous uploads (even though I have not explicitly passed --asynchronous-upload). It seems to happen randomly, but one way that I've managed to reproduce this is to stop the backup and restart it later. When this happens, I see duplicates of the volumes in question, both on Google Drive (as Google Drive allows multiple files in the same folder with the same name) and in the manifest (where the entries are repeated, but with differing checksums). Sometimes, one of the tar archives are incomplete (reports ""a lone zero block""), but when it's induced by restarting the backup, the volumes seem to extract without any errors. A file with an incorrect checksum might also be uploaded, but this is very rare in my (short) experience. 
-------------------------------------------------- Duplicity version: duplicity 0.7.03+bzr1099 (latest version from trunk ppa) Python version: Python 2.7.9 OS: Ubuntu 15.04 vivid Target filesystem: Linux, btrfs (to Google Drive) (Apologies for only providing a log for verbosity 5 -- it's quite troublesome to attempt to reproduce this, but I can do it at your request) ``` Original tags: duply gdocs pydrive",10 118021048,2015-05-31 22:59:03.384,webdav fails updating hidden directories (lp:#1460489),"[Original report](https://bugs.launchpad.net/bugs/1460489) created by **JP (0j-p)** ``` Duplicity version: duplicity 0.7.02 Python version: Python 2.7.9 OS Distro and version: Ubuntu 15.04 Type of target filesystem: webdav duplicity -v9 ""/home/foo/.session/"" ""webdavs://user:pass@site.url/foo/bar/.session"" WebDAV response status 409 with reason 'Conflict'. Backtrace of previous error: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 365, in inner_retry return fn(self, *args) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 515, in put self.__do_put(source_path, remote_filename) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 501, in __do_put self.backend._put(source_path, remote_filename) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/webdavbackend.py"", line 409, in _put raise e BackendException: Bad status code 409 reason Conflict. Attempt 1 failed. BackendException: Bad status code 409 reason Conflict. ``` Original tags: duplicity webdav",6 118021045,2015-05-25 10:24:32.392,Backend should use subfolders when available (lp:#1458507),"[Original report](https://bugs.launchpad.net/bugs/1458507) created by **Diego Peinador (diegopeinador)** ``` Some backends can't handle many files in a single folder (i.e. 
FTP) and when this limit is reached Duplicity can't check existing backups, and all new backups are full (even when Duplicity is configured to create incremental backups if the last full isn't so old). This limitation in the backend could be bypassed if duplicity used subfolders to store its data: for instance, one folder for the manifest files, and the volumes of each backup in their own folders. For some FTP backends this might not be enough, and a bigger default volume size may come in handy. ```",6 118022861,2015-05-20 19:02:03.110,Deja-dup fails backing up (lp:#1457185),"[Original report](https://bugs.launchpad.net/bugs/1457185) created by **Wolf Rogner (war-rsb)** ``` Deja-dup fails with: Failed to read /tmp/duplicity-dVcqOt-tempdir/mktemp-DUF_rW-325: (, IOError('CRC check failed 0xd1a0d155 != 0x4c0714ccL',), ) This is an error already reported in 2010 somewhere else. Tried to mitigate by reinstalling and clearing caches and parameters, to no avail. ProblemType: Bug DistroRelease: Ubuntu 15.04 Package: deja-dup 32.0-0ubuntu5 ProcVersionSignature: Ubuntu 3.19.0-18.18-generic 3.19.6 Uname: Linux 3.19.0-18-generic x86_64 NonfreeKernelModules: wl ApportVersion: 2.17.2-0ubuntu1 Architecture: amd64 CurrentDesktop: Unity Date: Wed May 20 20:58:37 2015 InstallationDate: Installed on 2013-05-28 (721 days ago) InstallationMedia: Ubuntu 13.04 ""Raring Ringtail"" - Release amd64+mac (20130424) SourcePackage: deja-dup UpgradeStatus: Upgraded to vivid on 2015-04-25 (25 days ago) ``` Original tags: amd64 apport-bug vivid",6 118022975,2015-05-08 11:57:12.827,Backup stops in the middle with a python traceback (lp:#1453114),"[Original report](https://bugs.launchpad.net/bugs/1453114) created by **Shoham Peller (shoham-peller)** ``` Backup Failed: Failed with an unknown error. 
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1494, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1488, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1337, in main do_backup(action) File ""/usr/bin/duplicity"", line 1458, in do_backup full_backup(col_stats) File ""/usr/bin/duplicity"", line 542, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 403, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 327, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 320, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-x6bEln- tempdir/mktemp-3XerZ6-13' ProblemType: Bug DistroRelease: Ubuntu 14.04 Package: deja-dup 30.0-0ubuntu4 ProcVersionSignature: Ubuntu 3.13.0-39.66-generic 3.13.11.8 Uname: Linux 3.13.0-39-generic x86_64 NonfreeKernelModules: nvidia ApportVersion: 2.14.1-0ubuntu3.10 Architecture: amd64 Date: Fri May 8 14:52:01 2015 InstallationDate: Installed on 2014-11-05 (184 days ago) InstallationMedia: Ubuntu 14.04.1 LTS ""Trusty Tahr"" - Release amd64 (20140722.2) SourcePackage: deja-dup UpgradeStatus: No upgrade log present (probably fresh install) ``` Original tags: amd64 apport-bug trusty",6 118022963,2015-05-01 04:18:55.228,Unknown Error during backup (lp:#1450702),"[Original report](https://bugs.launchpad.net/bugs/1450702) created by **Michael Millthorn (millthorn)** ``` Ok, updated Ubuntu (15.0.4 64 bit) today and the backups stopped working. Backup Failed. Failed with an unknown error. 
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1500, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1494, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1343, in main do_backup(action) File ""/usr/bin/duplicity"", line 1476, in do_backup incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 631, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 405, in write_multivol vi.set_hash(""SHA1"", gpg.get_hash(""SHA1"", tdp)) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 406, in get_hash fp = path.open(""rb"") File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 551, in open result = open(self.name, mode) IOError: [Errno 2] No such file or directory: '/tmp/duplicity-iDn9gh- tempdir/mktemp-AVmK4V-194' ---- org.gnome.DejaDup last-restore '' org.gnome.DejaDup periodic true org.gnome.DejaDup full-backup-period 90 org.gnome.DejaDup backend 'file' org.gnome.DejaDup last-run '2015-04-29T23:43:29.114518Z' org.gnome.DejaDup nag-check '2015-03-04T18:42:07.498008Z' org.gnome.DejaDup prompt-check '2013-11-01T04:13:02.518780Z' org.gnome.DejaDup root-prompt true org.gnome.DejaDup include-list ['$HOME', '/home/michael/Data'] org.gnome.DejaDup exclude-list ['/home/michael/.local/share/Trash'] org.gnome.DejaDup last-backup '2015-04-29T23:43:29.114518Z' org.gnome.DejaDup periodic-period 1 org.gnome.DejaDup delete-after 365 org.gnome.DejaDup.S3 id '' org.gnome.DejaDup.S3 bucket '' org.gnome.DejaDup.S3 folder 'ATS-LWS-001u' org.gnome.DejaDup.GDrive email '' org.gnome.DejaDup.GDrive folder '/deja-dup/ATS-LWS-001u' org.gnome.DejaDup.Rackspace username '' org.gnome.DejaDup.Rackspace container 'ATS-LWS-001u' org.gnome.DejaDup.File path 'sftp://' org.gnome.DejaDup.File short-name 'Backup' org.gnome.DejaDup.File uuid '2CDEA1F0DEA1B30E' org.gnome.DejaDup.File icon '. 
GThemedIcon drive-harddisk-usb drive-harddisk drive' org.gnome.DejaDup.File relpath b'LocalBackup' org.gnome.DejaDup.File name 'TOSHIBA External USB 3.0: Backup' org.gnome.DejaDup.File type 'volume' --- Distributor ID: Ubuntu Description: Ubuntu 15.04 Release: 15.04 Codename: vivid --- ```",6 118021042,2015-04-27 14:20:23.305,check_common_error is too permissive (lp:#1449057),"[Original report](https://bugs.launchpad.net/bugs/1449057) created by **David Coppit (coppit)** ``` I'm running rdiffdir, which is failing with errors like this: Error '[Errno 17] File exists' processing . Error '[Errno 1] Operation not permitted: './duplicity_temp.1'' processing . This bug report isn't about those. When this sort of failure happens, a duplicity_temp.# file is left around, and the program exits with a ""0"" exit code. I suppose the reason is that these sorts of problems are hidden by robust.check_common_error(). This failure mode is particularly nasty because the original (unpatched) file is still there, which could cause one to think that it's patched when it's not. Can you change robust.check_common_error() to not hide IOErrors? ```",6 118023097,2015-04-16 21:40:19.531,Duplicity crashes when verifying (lp:#1445229),"[Original report](https://bugs.launchpad.net/bugs/1445229) created by **Mattias Månsson (mattias-mansson)** ``` Done a backup on my server, it's mostly photos. First I just did a full backup of some folder, then added the rest with incremental. After completion I tried a verify on my pictures folder just to check that it worked. After a long time I get this error: # duplicity verify --file-to-restore pub/pictures dpbx:/// /pub/pictures Duplicity 0.6 series is being deprecated: See http://www.nongnu.org/duplicity/ Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Tue Apr 14 21:34:46 2015 GnuPG passphrase: Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1509, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1503, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1352, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1439, in do_backup verify(col_stats) File ""/usr/local/bin/duplicity"", line 827, in verify for backup_ropath, current_path in collated: File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 271, in collate2iters relem1 = riter1.next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 499, in integrate_patch_iters for patch_seq in collated: File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 379, in yield_tuples setrorps( overflow, elems ) File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 368, in setrorps elems[i] = iter_list[i].next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 99, in filter_path_iter for path in path_iter: File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 119, in difftar2path_iter multivol_fileobj.close() # aborting in middle of multivol File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 239, in close if not self.addtobuffer(): File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 227, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 334, in next self.set_tarfile() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 323, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/local/bin/duplicity"", line 729, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 636 Duplicity 0.6.25 Python 2.7.3 Ubuntu Server 12.04.5 x86_64 Filesystem: ext4 on mdraid ```",16 
118021035,2015-04-10 10:39:40.777,documentation to connect to European S3 bucket is wrong/misleading (lp:#1442571),"[Original report](https://bugs.launchpad.net/bugs/1442571) created by **az (az-debian)** ``` this is a forward of debian bug 782238, which lives here: https://bugs.debian.org/782238 a user has complained that the documentation for using european s3 buckets doesn't match reality. i don't use s3 so i have no idea whether this is a valid issue or not. ```",6 118019349,2015-04-01 15:51:26.668,Request: Webdav should support digest auth method (lp:#1439286),"[Original report](https://bugs.launchpad.net/bugs/1439286) created by **jcard (joao-fs-cardoso)** ``` When using duplicity 0.6.24 (or 0.6.25) under python 2.7.8 on openSUSE-13.2 with lighttpd-1.4.35 as a webdav server on a NAS I get the error: PASSPHRASE=xxx FTP_PASSWORD=yyy duplicity --gpg-options=""-z=0"" -v9 Music/Miles\ Davis/ webdav://jcard@dns-325:8080/webdav Using archive dir: /home/jcard/.cache/duplicity/7e46105d7678fc1c7f37bccec3a33579 Using backup name: 7e46105d7678fc1c7f37bccec3a33579 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded Using WebDAV protocol http Using WebDAV host dns-325 port 8080 Using WebDAV directory /webdav/ Main 
action: inc ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/bin/duplicity --gpg-options=-z=0 -v9 Music/Miles Davis/ webdav://jcard@dns-325:8080/webdav Linux silver 3.16.7-7-desktop #1 SMP PREEMPT Wed Dec 17 18:00:44 UTC 2014 (762f27a) x86_64 x86_64 /usr/bin/python 2.7.8 (default, Sep 30 2014, 15:34:38) [GCC] ================================================================================ Using temporary directory /tmp/duplicity-U9FkzT-tempdir Registering (mkstemp) temporary file /tmp/duplicity-U9FkzT- tempdir/mkstemp-9AspmZ-1 Temp has 9271398400 available, backup will use approx 34078720. Listing directory /webdav/ on WebDAV server WebDAV create connection on 'dns-325' (retry 1) WebDAV PROPFIND /webdav/ request with headers: {'Connection': 'keep-alive', 'Depth': '1'} WebDAV data length: 0 WebDAV response status 401 with reason 'Unauthorized'. WebDAV retry request with authentification headers. WebDAV PROPFIND /webdav/ request2 with headers: {'Connection': 'keep- alive', 'Depth': '1', 'Authorization': 'Digest None'} WebDAV data length: 0 WebDAV response2 status 400 with reason 'Bad Request'. Attempt 1 failed. BackendException: Bad status code 400 reason Bad Request. Backtrace of previous error: Traceback (innermost last):   File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 344, in _retry_fatal     return fn(self, *args)   File ""/usr/lib64/python2.7/site- packages/duplicity/backends/webdavbackend.py"", line 300, in _list     raise e  BackendException: Bad status code 400 reason Bad Request. The lighttpd error log shows 2015-04-01 16:13:23: (http_auth.c.1038) digest: missing field The same error appears even when using webdav://user:password@host:port instead of defining the FTP_PASSWORD variable. If however I use '--gio dav://' it succeeds without errors on the same server. 
The same webdav:// error happens for 0.6.25 on a NAS with uclibc and a cross-compiled python-2.7.2 and extensions, but I can't try the --gio option, as I think it makes no sense to have the gio backend on such a platform. All other backends (ftp/sftp/ftps/ssh/scp/rsync/gdocs/dpbx/s3) work OK on this platform. Probably unrelated -- for file: and s3: duplicity has to run with --no-encryption, or a ""File ....difftar.gpg was corrupted during upload."" error appears, although I can gpg decrypt and untar it. The final purpose is to provide a duplicity package for the NAS running Alt-F, a ""free alternative firmware for the DLink DNS-320/320L/321/323/325"", https://sourceforge.net/projects/alt-f Thanks ```",6 118021032,2015-03-19 11:44:10.435,On successful restore duplicity exits with (-1) (lp:#1434041),"[Original report](https://bugs.launchpad.net/bugs/1434041) created by **Kenneth Loafman (kenneth-loafman)** ``` From: https://answers.launchpad.net/duplicity/+question/262087 We've got some automation built up around duplicity, but we're having some occasional trouble with duplicity exiting with code -1 during a restore. E.g.: Writing foo/bar.jpg of type reg Deleting /var/lib/tmp/duplicity-GYz7n0-tempdir/mktemp-yHEK5p-7 Processed volume 6 of 49 Ended with non-successful exit code (-1) With the last line of output originating from our system. I've looked at the pydoc for the ErrorCode class, and it explicitly states that negative return codes are not to be used. The best I've been able to turn up is that Python subprocesses will return a negative status reflecting the signal they were sent, -1 correlating to SIGHUP. Investigating this further I've found that duplicity.pexpect.run() appears to be calling the close() function on child processes, which issues a SIGHUP, and then returning the child's status. Is this an expected behaviour? Can we safely assume that a -1 exit status is OK from duplicity? 
This is pure inference, but is duplicity issuing an early close() on the subprocess handling the restore once it has restored the necessary files, and then the -1 return code gets passed up through the stack until duplicity exits? ```",8 118021029,2015-03-17 07:57:18.213,Unknown responses not handled by duplicity (lp:#1432981),"[Original report](https://bugs.launchpad.net/bugs/1432981) created by **Samuel Bancal (samuel-bancal)** ``` As described in bug #1431019, we encountered a bug which left us with no incremental backups saved, only full ones. The solution has been to switch from scp:// to sftp://, but this revealed an unexpected behavior of duplicity: for several months, these backups completed with exit code 0, which led the admins to think the backups went smoothly. Could duplicity handle unknown responses in a way that returns an error code? Duplicity version : 0.7.02 Python version : 2.7.3 OS : Ubuntu 12.04.5 LTS Target Filesystem : NFS through SSH (Ubuntu 12.04.5 LTS) ```",6 118023150,2015-03-12 14:38:23.015,Errno 84 Invalid or incomplete multibyte or wide character (lp:#1431394),"[Original report](https://bugs.launchpad.net/bugs/1431394) created by **dospaness (dospaness)** ``` I try to restore a backup to an external hard drive (USB) that already contains the encrypted backup files, since I have no space on my internal hard drive. I have a similar issue as described here: http://askubuntu.com/questions/448803/deja-dup-gives-invalid-or-incomplete-multibyte-or-wide-character-when-attempti and I can't figure out how to resolve it in order to restore my backup. lsb_release -d Description: Ubuntu 14.04.2 LTS dpkg-query -W deja-dup duplicity deja-dup 30.0-0ubuntu4 duplicity 0.6.23-1ubuntu4.1 The debugging files come attached below. 
The error message that appears on the command line is: OSError: [Errno 84] Invalid or incomplete multibyte or wide character: '/media/user/Backups_Envpolicies/restore_test_2015-03-06/home/user/2_data/geodata/climate/ibge/Zona Econ\xf4mica Exclusiva _Lei.dbf' It seems that deja-dup has a problem restoring files whose names have an unrecognized or invalid encoding. In my case it is a group of files from Brazil, and the files came from MS Windows users. So they are probably encoded in some strange way that Nautilus does not recognize (please see the screenshot attached). So instead of skipping the corrupt file and continuing to restore the rest, deja-dup stops the restore process completely... ```",12 118021025,2015-03-09 08:00:38.539,--file-to-restore fails to restore files/dirs with accented characters in their names (lp:#1429741),"[Original report](https://bugs.launchpad.net/bugs/1429741) created by **franckbonin (franck-bonin)** ``` duplicity v 0.7.01 python v2.7.9 distribution : macports (v2.3.3) over OS X 10.6.8 filesystem : HFS+ failure example : > duplicity restore --file-to-restore Documents/chorégraphie\ patin\ 2013.odt --verbosity 9 cf+hubic://xxxxxxx ../Shared/Restored/ Using archive dir: /Users/franckbonin/.cache/duplicity/2d8531d580da25d1b8c18e8b8d4f24e1 Using backup name: 2d8531d580da25d1b8c18e8b8d4f24e1 Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of 
duplicity.backends.megabackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Failed: No module named requests_oauthlib Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/novaclient/v1_1/__init__.py:30: UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for novaclient.v2). The preferable way to get client class or object you can find in novaclient.client module. warnings.warn(""Module novaclient.v1_1 is deprecated (taken as a basis for "" Main action: restore ================================================================================ duplicity 0.7.01 (January 11, 2015) Args: /opt/local/bin/duplicity restore --file-to-restore Documents/chorégraphie patin 2013.odt --verbosity 9 cf+hubic://franckbonin ../Shared/Restored/ Darwin iMac-intel-de-Franck-Bonin.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 i386 /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python 2.7.9 (default, Dec 10 2014, 23:59:36) [GCC 4.2.1 (Apple Inc. 
build 5666) (dot 3)] ================================================================================ Using temporary directory /var/folders/0R/0Rq36y37Gs45ut- SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir Registering (mkstemp) temporary file /var/folders/0R/0Rq36y37Gs45ut- SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mkstemp-FaEDHI-1 Temp has 829892550656 available, backup will use approx 34078720. Local and Remote metadata are synchronized, no sync needed. 189 files exist on backend 7 files exist in cache Extracting backup chains from list of files: [u'duplicity-full-signatures.20150308T084916Z.sigtar.gpg', u'duplicity- full.20150308T084916Z.manifest.gpg', u'duplicity- full.20150308T084916Z.vol1.difftar.gpg', u'duplicity- full.20150308T084916Z.vol10.difftar.gpg', u'duplicity- full.20150308T084916Z.vol100.difftar.gpg', u'duplicity- full.20150308T084916Z.vol101.difftar.gpg', u'duplicity- full.20150308T084916Z.vol102.difftar.gpg', u'duplicity- full.20150308T084916Z.vol103.difftar.gpg', u'duplicity- full.20150308T084916Z.vol104.difftar.gpg', u'duplicity- full.20150308T084916Z.vol105.difftar.gpg', u'duplicity- full.20150308T084916Z.vol106.difftar.gpg', u'duplicity- full.20150308T084916Z.vol107.difftar.gpg', u'duplicity- full.20150308T084916Z.vol108.difftar.gpg', u'duplicity- full.20150308T084916Z.vol109.difftar.gpg', u'duplicity- full.20150308T084916Z.vol11.difftar.gpg', u'duplicity- full.20150308T084916Z.vol110.difftar.gpg', u'duplicity- full.20150308T084916Z.vol111.difftar.gpg', u'duplicity- full.20150308T084916Z.vol112.difftar.gpg', u'duplicity- full.20150308T084916Z.vol113.difftar.gpg', u'duplicity- full.20150308T084916Z.vol114.difftar.gpg', u'duplicity- full.20150308T084916Z.vol115.difftar.gpg', u'duplicity- full.20150308T084916Z.vol116.difftar.gpg', u'duplicity- full.20150308T084916Z.vol117.difftar.gpg', u'duplicity- 
full.20150308T084916Z.vol118.difftar.gpg', u'duplicity- full.20150308T084916Z.vol119.difftar.gpg', u'duplicity- full.20150308T084916Z.vol12.difftar.gpg', u'duplicity- full.20150308T084916Z.vol120.difftar.gpg', u'duplicity- full.20150308T084916Z.vol121.difftar.gpg', u'duplicity- full.20150308T084916Z.vol122.difftar.gpg', u'duplicity- full.20150308T084916Z.vol123.difftar.gpg', u'duplicity- full.20150308T084916Z.vol124.difftar.gpg', u'duplicity- full.20150308T084916Z.vol125.difftar.gpg', u'duplicity- full.20150308T084916Z.vol126.difftar.gpg', u'duplicity- full.20150308T084916Z.vol127.difftar.gpg', u'duplicity- full.20150308T084916Z.vol128.difftar.gpg', u'duplicity- full.20150308T084916Z.vol129.difftar.gpg', u'duplicity- full.20150308T084916Z.vol13.difftar.gpg', u'duplicity- full.20150308T084916Z.vol130.difftar.gpg', u'duplicity- full.20150308T084916Z.vol131.difftar.gpg', u'duplicity- full.20150308T084916Z.vol132.difftar.gpg', u'duplicity- full.20150308T084916Z.vol133.difftar.gpg', u'duplicity- full.20150308T084916Z.vol134.difftar.gpg', u'duplicity- full.20150308T084916Z.vol135.difftar.gpg', u'duplicity- full.20150308T084916Z.vol136.difftar.gpg', u'duplicity- full.20150308T084916Z.vol137.difftar.gpg', u'duplicity- full.20150308T084916Z.vol138.difftar.gpg', u'duplicity- full.20150308T084916Z.vol139.difftar.gpg', u'duplicity- full.20150308T084916Z.vol14.difftar.gpg', u'duplicity- full.20150308T084916Z.vol140.difftar.gpg', u'duplicity- full.20150308T084916Z.vol141.difftar.gpg', u'duplicity- full.20150308T084916Z.vol142.difftar.gpg', u'duplicity- full.20150308T084916Z.vol143.difftar.gpg', u'duplicity- full.20150308T084916Z.vol144.difftar.gpg', u'duplicity- full.20150308T084916Z.vol145.difftar.gpg', u'duplicity- full.20150308T084916Z.vol146.difftar.gpg', u'duplicity- full.20150308T084916Z.vol147.difftar.gpg', u'duplicity- full.20150308T084916Z.vol148.difftar.gpg', u'duplicity- full.20150308T084916Z.vol149.difftar.gpg', u'duplicity- full.20150308T084916Z.vol15.difftar.gpg', 
u'duplicity- full.20150308T084916Z.vol150.difftar.gpg', u'duplicity- full.20150308T084916Z.vol151.difftar.gpg', u'duplicity- full.20150308T084916Z.vol152.difftar.gpg', u'duplicity- full.20150308T084916Z.vol153.difftar.gpg', u'duplicity- full.20150308T084916Z.vol154.difftar.gpg', u'duplicity- full.20150308T084916Z.vol155.difftar.gpg', u'duplicity- full.20150308T084916Z.vol156.difftar.gpg', u'duplicity- full.20150308T084916Z.vol157.difftar.gpg', u'duplicity- full.20150308T084916Z.vol158.difftar.gpg', u'duplicity- full.20150308T084916Z.vol159.difftar.gpg', u'duplicity- full.20150308T084916Z.vol16.difftar.gpg', u'duplicity- full.20150308T084916Z.vol160.difftar.gpg', u'duplicity- full.20150308T084916Z.vol161.difftar.gpg', u'duplicity- full.20150308T084916Z.vol162.difftar.gpg', u'duplicity- full.20150308T084916Z.vol163.difftar.gpg', u'duplicity- full.20150308T084916Z.vol164.difftar.gpg', u'duplicity- full.20150308T084916Z.vol165.difftar.gpg', u'duplicity- full.20150308T084916Z.vol166.difftar.gpg', u'duplicity- full.20150308T084916Z.vol167.difftar.gpg', u'duplicity- full.20150308T084916Z.vol168.difftar.gpg', u'duplicity- full.20150308T084916Z.vol169.difftar.gpg', u'duplicity- full.20150308T084916Z.vol17.difftar.gpg', u'duplicity- full.20150308T084916Z.vol170.difftar.gpg', u'duplicity- full.20150308T084916Z.vol171.difftar.gpg', u'duplicity- full.20150308T084916Z.vol172.difftar.gpg', u'duplicity- full.20150308T084916Z.vol173.difftar.gpg', u'duplicity- full.20150308T084916Z.vol174.difftar.gpg', u'duplicity- full.20150308T084916Z.vol175.difftar.gpg', u'duplicity- full.20150308T084916Z.vol176.difftar.gpg', u'duplicity- full.20150308T084916Z.vol177.difftar.gpg', u'duplicity- full.20150308T084916Z.vol178.difftar.gpg', u'duplicity- full.20150308T084916Z.vol179.difftar.gpg', u'duplicity- full.20150308T084916Z.vol18.difftar.gpg', u'duplicity- full.20150308T084916Z.vol180.difftar.gpg', u'duplicity- full.20150308T084916Z.vol181.difftar.gpg', u'duplicity- 
full.20150308T084916Z.vol19.difftar.gpg', u'duplicity- full.20150308T084916Z.vol2.difftar.gpg', u'duplicity- full.20150308T084916Z.vol20.difftar.gpg', u'duplicity- full.20150308T084916Z.vol21.difftar.gpg', u'duplicity- full.20150308T084916Z.vol22.difftar.gpg', u'duplicity- full.20150308T084916Z.vol23.difftar.gpg', u'duplicity- full.20150308T084916Z.vol24.difftar.gpg', u'duplicity- full.20150308T084916Z.vol25.difftar.gpg', u'duplicity- full.20150308T084916Z.vol26.difftar.gpg', u'duplicity- full.20150308T084916Z.vol27.difftar.gpg', u'duplicity- full.20150308T084916Z.vol28.difftar.gpg', u'duplicity- full.20150308T084916Z.vol29.difftar.gpg', u'duplicity- full.20150308T084916Z.vol3.difftar.gpg', u'duplicity- full.20150308T084916Z.vol30.difftar.gpg', u'duplicity- full.20150308T084916Z.vol31.difftar.gpg', u'duplicity- full.20150308T084916Z.vol32.difftar.gpg', u'duplicity- full.20150308T084916Z.vol33.difftar.gpg', u'duplicity- full.20150308T084916Z.vol34.difftar.gpg', u'duplicity- full.20150308T084916Z.vol35.difftar.gpg', u'duplicity- full.20150308T084916Z.vol36.difftar.gpg', u'duplicity- full.20150308T084916Z.vol37.difftar.gpg', u'duplicity- full.20150308T084916Z.vol38.difftar.gpg', u'duplicity- full.20150308T084916Z.vol39.difftar.gpg', u'duplicity- full.20150308T084916Z.vol4.difftar.gpg', u'duplicity- full.20150308T084916Z.vol40.difftar.gpg', u'duplicity- full.20150308T084916Z.vol41.difftar.gpg', u'duplicity- full.20150308T084916Z.vol42.difftar.gpg', u'duplicity- full.20150308T084916Z.vol43.difftar.gpg', u'duplicity- full.20150308T084916Z.vol44.difftar.gpg', u'duplicity- full.20150308T084916Z.vol45.difftar.gpg', u'duplicity- full.20150308T084916Z.vol46.difftar.gpg', u'duplicity- full.20150308T084916Z.vol47.difftar.gpg', u'duplicity- full.20150308T084916Z.vol48.difftar.gpg', u'duplicity- full.20150308T084916Z.vol49.difftar.gpg', u'duplicity- full.20150308T084916Z.vol5.difftar.gpg', u'duplicity- full.20150308T084916Z.vol50.difftar.gpg', u'duplicity- 
full.20150308T084916Z.vol51.difftar.gpg', u'duplicity- full.20150308T084916Z.vol52.difftar.gpg', u'duplicity- full.20150308T084916Z.vol53.difftar.gpg', u'duplicity- full.20150308T084916Z.vol54.difftar.gpg', u'duplicity- full.20150308T084916Z.vol55.difftar.gpg', u'duplicity- full.20150308T084916Z.vol56.difftar.gpg', u'duplicity- full.20150308T084916Z.vol57.difftar.gpg', u'duplicity- full.20150308T084916Z.vol58.difftar.gpg', u'duplicity- full.20150308T084916Z.vol59.difftar.gpg', u'duplicity- full.20150308T084916Z.vol6.difftar.gpg', u'duplicity- full.20150308T084916Z.vol60.difftar.gpg', u'duplicity- full.20150308T084916Z.vol61.difftar.gpg', u'duplicity- full.20150308T084916Z.vol62.difftar.gpg', u'duplicity- full.20150308T084916Z.vol63.difftar.gpg', u'duplicity- full.20150308T084916Z.vol64.difftar.gpg', u'duplicity- full.20150308T084916Z.vol65.difftar.gpg', u'duplicity- full.20150308T084916Z.vol66.difftar.gpg', u'duplicity- full.20150308T084916Z.vol67.difftar.gpg', u'duplicity- full.20150308T084916Z.vol68.difftar.gpg', u'duplicity- full.20150308T084916Z.vol69.difftar.gpg', u'duplicity- full.20150308T084916Z.vol7.difftar.gpg', u'duplicity- full.20150308T084916Z.vol70.difftar.gpg', u'duplicity- full.20150308T084916Z.vol71.difftar.gpg', u'duplicity- full.20150308T084916Z.vol72.difftar.gpg', u'duplicity- full.20150308T084916Z.vol73.difftar.gpg', u'duplicity- full.20150308T084916Z.vol74.difftar.gpg', u'duplicity- full.20150308T084916Z.vol75.difftar.gpg', u'duplicity- full.20150308T084916Z.vol76.difftar.gpg', u'duplicity- full.20150308T084916Z.vol77.difftar.gpg', u'duplicity- full.20150308T084916Z.vol78.difftar.gpg', u'duplicity- full.20150308T084916Z.vol79.difftar.gpg', u'duplicity- full.20150308T084916Z.vol8.difftar.gpg', u'duplicity- full.20150308T084916Z.vol80.difftar.gpg', u'duplicity- full.20150308T084916Z.vol81.difftar.gpg', u'duplicity- full.20150308T084916Z.vol82.difftar.gpg', u'duplicity- full.20150308T084916Z.vol83.difftar.gpg', u'duplicity- 
full.20150308T084916Z.vol84.difftar.gpg', u'duplicity- full.20150308T084916Z.vol85.difftar.gpg', u'duplicity- full.20150308T084916Z.vol86.difftar.gpg', u'duplicity- full.20150308T084916Z.vol87.difftar.gpg', u'duplicity- full.20150308T084916Z.vol88.difftar.gpg', u'duplicity- full.20150308T084916Z.vol89.difftar.gpg', u'duplicity- full.20150308T084916Z.vol9.difftar.gpg', u'duplicity- full.20150308T084916Z.vol90.difftar.gpg', u'duplicity- full.20150308T084916Z.vol91.difftar.gpg', u'duplicity- full.20150308T084916Z.vol92.difftar.gpg', u'duplicity- full.20150308T084916Z.vol93.difftar.gpg', u'duplicity- full.20150308T084916Z.vol94.difftar.gpg', u'duplicity- full.20150308T084916Z.vol95.difftar.gpg', u'duplicity- full.20150308T084916Z.vol96.difftar.gpg', u'duplicity- full.20150308T084916Z.vol97.difftar.gpg', u'duplicity- full.20150308T084916Z.vol98.difftar.gpg', u'duplicity- full.20150308T084916Z.vol99.difftar.gpg', u'duplicity- inc.20150308T084916Z.to.20150309T061131Z.manifest.gpg', u'duplicity- inc.20150308T084916Z.to.20150309T061131Z.vol1.difftar.gpg', u'duplicity- inc.20150308T084916Z.to.20150309T061131Z.vol2.difftar.gpg', u'duplicity- inc.20150308T084916Z.to.20150309T061131Z.vol3.difftar.gpg', u'duplicity- inc.20150308T084916Z.to.20150309T061131Z.vol4.difftar.gpg', u'duplicity- new-signatures.20150308T084916Z.to.20150309T061131Z.sigtar.gpg'] Le fichier duplicity-full-signatures.20150308T084916Z.sigtar.gpg ne fait pas partie d’un jeu connu ; création d’un nouveau jeu Fichier ignoré (rejeté par le jeu de sauvegarde) « duplicity-full- signatures.20150308T084916Z.sigtar.gpg » Le fichier duplicity-full.20150308T084916Z.manifest.gpg ne fait pas partie d’un jeu connu ; création d’un nouveau jeu Le fichier duplicity-full.20150308T084916Z.vol1.difftar.gpg fait partie d’un jeu connu Le fichier duplicity-full.20150308T084916Z.vol10.difftar.gpg fait partie d’un jeu connu Le fichier duplicity-full.20150308T084916Z.vol100.difftar.gpg fait partie d’un jeu connu Le fichier 
duplicity-full.20150308T084916Z.vol101.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol102.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol103.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol104.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol105.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol106.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol107.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol108.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol109.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol11.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol110.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol111.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol112.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol113.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol114.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol115.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol116.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol117.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol118.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol119.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol12.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol120.difftar.gpg is part of a known set File
duplicity-full.20150308T084916Z.vol121.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol122.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol123.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol124.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol125.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol126.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol127.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol128.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol129.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol13.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol130.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol131.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol132.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol133.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol134.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol135.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol136.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol137.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol138.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol139.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol14.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol140.difftar.gpg is part of a known set File
duplicity-full.20150308T084916Z.vol141.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol142.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol143.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol144.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol145.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol146.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol147.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol148.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol149.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol15.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol150.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol151.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol152.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol153.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol154.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol155.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol156.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol157.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol158.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol159.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol16.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol160.difftar.gpg is part of a known set File
duplicity-full.20150308T084916Z.vol161.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol162.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol163.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol164.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol165.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol166.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol167.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol168.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol169.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol17.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol170.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol171.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol172.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol173.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol174.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol175.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol176.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol177.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol178.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol179.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol18.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol180.difftar.gpg is part of a known set File
duplicity-full.20150308T084916Z.vol181.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol19.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol2.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol20.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol21.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol22.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol23.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol24.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol25.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol26.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol27.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol28.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol29.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol3.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol30.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol31.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol32.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol33.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol34.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol35.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol36.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol37.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol38.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol39.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol4.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol40.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol41.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol42.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol43.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol44.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol45.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol46.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol47.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol48.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol49.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol5.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol50.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol51.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol52.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol53.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol54.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol55.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol56.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol57.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol58.difftar.gpg is part of a known set File
duplicity-full.20150308T084916Z.vol59.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol6.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol60.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol61.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol62.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol63.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol64.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol65.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol66.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol67.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol68.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol69.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol7.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol70.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol71.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol72.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol73.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol74.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol75.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol76.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol77.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol78.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol79.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol8.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol80.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol81.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol82.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol83.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol84.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol85.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol86.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol87.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol88.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol89.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol9.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol90.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol91.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol92.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol93.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol94.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol95.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol96.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol97.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol98.difftar.gpg is part of a known set File duplicity-full.20150308T084916Z.vol99.difftar.gpg is part of a known set File
duplicity-inc.20150308T084916Z.to.20150309T061131Z.manifest.gpg is not part of a known set; creating new set File duplicity-inc.20150308T084916Z.to.20150309T061131Z.vol1.difftar.gpg is part of a known set File duplicity-inc.20150308T084916Z.to.20150309T061131Z.vol2.difftar.gpg is part of a known set File duplicity-inc.20150308T084916Z.to.20150309T061131Z.vol3.difftar.gpg is part of a known set File duplicity-inc.20150308T084916Z.to.20150309T061131Z.vol4.difftar.gpg is part of a known set File duplicity-new-signatures.20150308T084916Z.to.20150309T061131Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-new-signatures.20150308T084916Z.to.20150309T061131Z.sigtar.gpg' Found backup chain [Sun Mar 8 09:49:16 2015]-[Sun Mar 8 09:49:16 2015] Added incremental backup set (start_time: Sun Mar 8 09:49:16 2015; end_time: Mon Mar 9 07:11:31 2015) Added set Mon Mar 9 07:11:31 2015 to pre-existing chain [Sun Mar 8 09:49:16 2015]-[Mon Mar 9 07:11:31 2015] Last full backup date: Sun Mar 8 09:49:16 2015 Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /Users/franckbonin/.cache/duplicity/2d8531d580da25d1b8c18e8b8d4f24e1 Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sun Mar 8 09:49:16 2015 Chain end time: Mon Mar 9 07:11:31 2015 Number of contained backup sets: 2 Total number of contained volumes: 185 Type of backup set: Time: Num volumes: Full Sun Mar 8 09:49:16 2015 181 Incremental Mon Mar 9 07:11:31 2015 4 ------------------------- No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. 
GnuPG passphrase: Registering (mktemp) temporary file /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-l1jFcr-2 Deleting /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-l1jFcr-2 Forgetting temporary file /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-l1jFcr-2 Processed volume 1 of 185 Registering (mktemp) temporary file /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-oPL2ec-3 Deleting /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-oPL2ec-3 Forgetting temporary file /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mktemp-oPL2ec-3 Processed volume 2 of 185 Documents/chorégraphie patin 2013.odt not found in archive, no files restored. Releasing lockfile Removing still remembered temporary file /var/folders/0R/0Rq36y37Gs45ut-SDJmcAk+++TI/-Tmp-/duplicity-0cdldC-tempdir/mkstemp-FaEDHI-1 ```",24 118021023,2015-02-09 06:09:26.719,gpg asks for password (lp:#1419618),"[Original report](https://bugs.launchpad.net/bugs/1419618) created by **Jon (jonablett)** ``` Using Duplicity 0.7.01 Using the --encrypt-key [key] option should avoid being prompted for the password. I am getting a pinentry dialogue asking for the password (either with or without --use-agent). This might be related to a recent upgrade to GnuPG 2.1, not sure. It doesn't matter if I enter a correct or incorrect password (or even blank). It still proceeds to back up. My concern is that I want it to run automatically in the background but I can't with pinentry stopping the process. 
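# Editor's note, not part of the original report: a commonly suggested
# workaround for GnuPG 2.1 pinentry prompts is to allow loopback pinentry
# in gpg-agent and pass the matching option through duplicity. This is a
# sketch only; --encrypt-key and --gpg-options are real duplicity flags,
# but KEYID and the target URL are placeholders, and behaviour with
# duplicity 0.7.x is an assumption.
echo allow-loopback-pinentry >> ~/.gnupg/gpg-agent.conf
gpgconf --kill gpg-agent
duplicity --encrypt-key KEYID --gpg-options '--pinentry-mode loopback' /data sftp://user@host/backups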
```",6 118021018,2015-02-05 13:11:35.522,Misleading start time reported in statistics (lp:#1418543),"[Original report](https://bugs.launchpad.net/bugs/1418543) created by **Phill (phill.l)** ``` The statistics section reports a start and end time, however, the start time on my machine is always over 5 minutes after the actual start time. The section below shows a log file from a script that logs the current time, immediately runs duplicity and then logs the end time. During the 5+ minutes not included in the statistics the CPU usage by duplicity (as shown by `top`) is extremely high. -- Backup Started Thu Feb 5 09:00:01 UTC 2015 Import of duplicity.backends.giobackend Failed: No module named gio Local and Remote metadata are synchronized, no sync needed. Last full backup date: Thu Jan 29 16:00:04 2015 --------------[ Backup Statistics ]-------------- StartTime 1423127161.48 (Thu Feb 5 09:06:01 2015) EndTime 1423127165.40 (Thu Feb 5 09:06:05 2015) ElapsedTime 3.92 (3.92 seconds) SourceFiles 799 SourceFileSize 514518209 (491 MB) NewFiles 0 NewFileSize 0 (0 bytes) DeletedFiles 0 ChangedFiles 1 ChangedFileSize 1966187 (1.88 MB) ChangedDeltaSize 0 (0 bytes) DeltaEntries 1 RawDeltaSize 14530 (14.2 KB) TotalDestinationSizeChange 1467 (1.43 KB) Errors 9 ------------------------------------------------- Backup Ended Thu Feb 5 09:06:06 UTC 2015 --- Package: duplicity Status: install ok installed Priority: optional Section: utils Installed-Size: 1028 Maintainer: Ubuntu Developers Architecture: amd64 Version: 0.6.18-0ubuntu3.5 Depends: libc6 (>= 2.4), librsync1 (>= 0.9.6), python2.7, python (>= 2.7.1-0ubuntu2), python (<< 2.8), python-gnupginterface (>= 0.3.2-9.1), python-lockfile Suggests: python-boto, ncftp, rsync, ssh, python-paramiko Breaks: deja-dup (<< 22.0-0ubuntu5) Description: encrypted bandwidth-efficient backup  Duplicity backs directories by producing encrypted tar-format volumes  and uploading them to a remote or local file server. 
Because duplicity  uses librsync, the incremental archives are space efficient and only  record the parts of files that have changed since the last backup.  Because duplicity uses GnuPG to encrypt and/or sign these archives, they  will be safe from spying and/or modification by the server. Homepage: http://duplicity.nongnu.org/ Original-Maintainer: Alexander Zangerl ```",6 118019340,2015-01-28 20:10:12.243,Option to specify changed files instead of scanning filesystem (lp:#1415621),"[Original report](https://bugs.launchpad.net/bugs/1415621) created by **Aaron Whitehouse (aaron-whitehouse)** ``` From Question #261358 Provide a way to pass a list of created/modified/deleted files to duplicity and avoid duplicity doing a file system scan. This would speed up the backup process. This could take a list generated by fswatch ( http://emcrisostomo.github.io/fswatch/ ) as an input and would allow easier creation of inotify-based backup systems (see, for example, Bug #781428 ). ```",12 118021008,2015-01-17 03:23:38.897,Update man pages for gdocs with example (lp:#1411894),"[Original report](https://bugs.launchpad.net/bugs/1411894) created by **ShadowXVII (thedarkestshadow)** ``` Currently the man pages use the following syntax as an example for gdocs. gdocs://user[:password]@other.host/some_dir It isn't clear that @other.host should be substituted with @gmail.com for normal Google accounts, or @domain.com for Google Apps accounts. Google services usually require a full username (including the @gmail.com suffix) when authenticating to other services, so it was natural to include this in the user field. This causes parsing errors & two-factor authentication error messages. The example should provide an explicit @gmail.com example and show the correct breakdown of user & password in relation to the syntax; e.g. 
user bob@gmail.com with a password xyz into the directory /Backups/Duplicity; gdocs://bob:xyz@gmail.com/Backups/Duplicity not gdocs://bob@gmail.com:xyz@other.host/some_dir (Duplicity v0.7.0) ``` Original tags: man",6 118020977,2015-01-12 21:06:57.943,Error when parsing exclude filelist (lp:#1409908),"[Original report](https://bugs.launchpad.net/bugs/1409908) created by **Mikhalych (zhupikov)** ``` Duplicity version 0.7.0 OS Ubuntu 14.04.1 Target: ftp (Linux) Hello! If I create a full backup with the --exclude-filelist option (filelist attached), duplicity generates an error: WARNING 1 . Import of duplicity.backends.dpbxbackend Failed: No module named dropbox NOTICE 1 . LFTP version is 4.4.13 NOTICE 1 . Reading filelist exclude.txt NOTICE 1 . Sorting filelist exclude.txt NOTICE 1 . Local and Remote metadata are synchronized, no sync needed. NOTICE 1 . Last full backup date: none NOTICE 1 . Reading filelist exclude.txt ERROR 30 ValueError . Traceback (most recent call last): . File ""/usr/bin/duplicity"", line 1500, in . with_tempdir(main) . File ""/usr/bin/duplicity"", line 1494, in with_tempdir . fn() . File ""/usr/bin/duplicity"", line 1343, in main . do_backup(action) . File ""/usr/bin/duplicity"", line 1464, in do_backup . full_backup(col_stats) . File ""/usr/bin/duplicity"", line 522, in full_backup . commandline.set_selection() . File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 914, in set_selection . sel.ParseArgs(select_opts, select_files) . File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 237, in ParseArgs . filelists[filelists_index], 0, arg)) . File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 315, in filelist_get_sf . self.filelist_read(filelist_fp, inc_default, filelist_name) . File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 351, in filelist_read . for line in filelist_fp.read().split(separator): . 
ValueError: I/O operation on closed file . If I list all of these excluded folders on the command line with the --exclude option, duplicity works fine. Sample Commands: Error: duplicity full --no-encryption --exclude-filelist exclude.txt --ftp-regular --log-file family_backup.log --progress --progress-rate 10 --volsize 700 /home/family/ lftp+ftp://useruser:q1234567@192.168.0.10/sda1/Backup/Family Fine: duplicity full --no-encryption --exclude /home/family/Аудиокниги --exclude /home/family/Музыка --exclude /home/family/Видео --exclude /home/family/Семейный\ архив --ftp-regular --log-file family_backup.log --progress --progress-rate 10 --volsize 700 /home/family/ ftp://useruser:q1234567@192.168.0.10/sda1/Backup/Family ```",16 118020975,2014-12-26 01:34:01.735,IPv6 causes Duplicity to fail with connection refused (lp:#1405705),"[Original report](https://bugs.launchpad.net/bugs/1405705) created by **Ricky-burg (ricky-burg)** ``` Duplicity version: 0.6.24-4.fc21 Python version: 2.7.8-7.fc21 With IPv6 and IPv4 hostname: [rburgin@Ricky-PC ~]$ duplicity -v9 full /home/rburgin/Documents scp://duplicity@private.orbixx.com/Documents Using archive dir: /home/rburgin/.cache/duplicity/79ed15f54c43bfd4b8bb0c0c229c5f47 Using backup name: 79ed15f54c43bfd4b8bb0c0c229c5f47 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of 
duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded Using temporary directory /tmp/duplicity-N2Bbih-tempdir Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1502, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1496, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1329, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib64/python2.7/site-packages/duplicity/commandline.py"", line 1059, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/lib64/python2.7/site-packages/duplicity/commandline.py"", line 952, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 163, in get_backend return _backends[pu.scheme](pu) File ""/usr/lib64/python2.7/site- packages/duplicity/backends/_ssh_paramiko.py"", line 219, in __init__ self.config['port'],e)) BackendException: ssh connection to duplicity@private.orbixx.com:22 failed: [Errno 111] Connection refused BackendException: ssh connection to duplicity@private.orbixx.com:22 failed: [Errno 111] Connection refused With IPv4 (same host): [rburgin@Ricky-PC ~]$ duplicity -v9 full /home/rburgin/Documents ssh://duplicity@78.143.255.136/Documents Using archive dir: /home/rburgin/.cache/duplicity/2ce2771e0b9865a5d162e53ca22a5987 Using backup name: 2ce2771e0b9865a5d162e53ca22a5987 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of 
duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded ssh: starting thread (client mode): 0xb0b16f10L ssh: Connected (version 2.0, client OpenSSH_6.4) ssh: kex algos:[u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh- sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman- group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman- group1-sha1'] server key:[u'ssh-rsa', u'ecdsa-sha2-nistp256'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', 
u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False ssh: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr ssh: using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none ssh: Switch to new keys ... ssh: Trying SSH agent key 9516223a1b48a28b1990acb9431bcaac ssh: userauth is OK ssh: Authentication (publickey) failed. ssh: Trying SSH agent key 075fb8fef979f051e824e8a7cfdce59b ssh: userauth is OK ssh: Authentication (publickey) failed. ssh: Trying SSH agent key cd36d7cde74b97d3e2e87c42f57b2305 ssh: userauth is OK ssh: Authentication (publickey) successful! ssh: [chan 1] Max packet in: 32768 bytes ssh: [chan 1] Max packet out: 32768 bytes ssh: Secsh channel 1 opened. 
ssh: [chan 1] Sesch channel 1 request ok ssh: [chan 1] Opened sftp connection (server version 3) ssh: [chan 1] stat('Documents') ssh: [chan 1] stat('Documents') ssh: [chan 1] normalize('Documents') Main action: full ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/bin/duplicity -v9 full /home/rburgin/Documents ssh://duplicity@78.143.255.136/Documents Linux Ricky-PC 3.17.7-300.fc21.x86_64 #1 SMP Wed Dec 17 03:08:44 UTC 2014 x86_64 x86_64 /usr/bin/python 2.7.8 (default, Nov 10 2014, 08:19:18) [GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] ================================================================================ Using temporary directory /tmp/duplicity-Gbd4V3-tempdir Registering (mkstemp) temporary file /tmp/duplicity-Gbd4V3-tempdir/mkstemp-TAALWO-1 Temp has 4171616256 available, backup will use approx 34078720. ssh: [chan 1] listdir('/home/duplicity/Documents/.') Local and Remote metadata are synchronized, no sync needed. ssh: [chan 1] listdir('/home/duplicity/Documents/.') 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: SSHParamikoBackend Archive directory: /home/rburgin/.cache/duplicity/2ce2771e0b9865a5d162e53ca22a5987 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. GnuPG passphrase: ssh: Sending global request ""keepalive@lag.net"" And any other IPv4-only host should naturally work. ``` Original tags: ipv6",10 118020942,2014-12-25 00:34:56.965,Duplicity hangs after errors (lp:#1405502),"[Original report](https://bugs.launchpad.net/bugs/1405502) created by **E.B. (emailbuilder88)** ``` Duplicity has started hanging after errors on two separate machines. 
Both machines run hand-compiled duplicity 0.6.25 on Ubuntu 12.04. I don't know whether the errors themselves are related to duplicity hanging, so I am including brief -v9 output from both machines. Note that the 2nd machine might be choking on an old duplicity manifest file that was saved off while recovering from hard-disk corruption, but I'm not certain whether that matters. The 1st machine didn't have any disk-corruption problems. The end of both outputs includes the bit generated when I pressed Ctrl-C. ######### Machine 1 ######### Selecting /home/johnm/Maildir/dovecot.index.log.2 Releasing lockfile Removing still remembered temporary file /tmp/duplicity-c8o1G4-tempdir/mktemp-KV1uB3-47 Removing still remembered temporary file /tmp/duplicity-c8o1G4-tempdir/mkstemp-uPMJ5D-1 Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1509, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1503, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1352, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1485, in do_backup incremental_backup(sig_chain) File ""/usr/local/bin/duplicity"", line 633, in incremental_backup globals.backend) File ""/usr/local/bin/duplicity"", line 399, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 331, in GPGWriteFile data = block_iter.next().data File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 518, in next result = self.process(self.input_iter.next()) File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 190, in get_delta_iter for new_path, sig_path in collated: File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 281, in collate2iters relem2 = riter2.next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 350, in combine_path_iters refresh_triple_list(triple_list) File 
""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 336, in refresh_triple_list new_triple = get_triple(old_triple[1]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 322, in get_triple path = path_iter_list[iter_index].next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 234, in sigtar2path_iter for tarinfo in tf: File ""/usr/local/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 2470, in next tarinfo = self.tarfile.next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 2319, in next self.fileobj.seek(self.offset) File ""/usr/lib/python2.7/gzip.py"", line 429, in seek self.read(1024) File ""/usr/lib/python2.7/gzip.py"", line 256, in read self._read(readsize) File ""/usr/lib/python2.7/gzip.py"", line 320, in _read self._read_eof() File ""/usr/lib/python2.7/gzip.py"", line 342, in _read_eof hex(self.crc))) IOError: CRC check failed 0x43872847 != 0x7d505e15L ^C close failed in file object destructor: IOError: [Errno 32] Broken pipe Exception KeyboardInterrupt in ignored ######### Machine 2 ######### Note: the error seems to happen on an old duplicity file that I moved away for safekeeping while restoring from a corrupted disk. Maybe this is the cause of the problem? 
Selecting /var/old-corrupted/root/.cache/duplicity/a5af988f0861715f4b14466a0c03b4ed/duplicity-full.20141112T033613Z.manifest Releasing lockfile Removing still remembered temporary file /tmp/duplicity-6G4eQj-tempdir/mkstemp-IaPa4Q-1 Removing still remembered temporary file /tmp/duplicity-6G4eQj-tempdir/mktemp-JzVNTP-3 Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1509, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1503, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1352, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1485, in do_backup incremental_backup(sig_chain) File ""/usr/local/bin/duplicity"", line 633, in incremental_backup globals.backend) File ""/usr/local/bin/duplicity"", line 399, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 331, in GPGWriteFile data = block_iter.next().data File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 518, in next result = self.process(self.input_iter.next()) File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 190, in get_delta_iter for new_path, sig_path in collated: File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 281, in collate2iters relem2 = riter2.next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 350, in combine_path_iters refresh_triple_list(triple_list) File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 336, in refresh_triple_list new_triple = get_triple(old_triple[1]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 322, in get_triple path = path_iter_list[iter_index].next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 234, in sigtar2path_iter for tarinfo in tf: File ""/usr/local/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 2470, in next tarinfo = self.tarfile.next() 
File ""/usr/local/lib/python2.7/dist-packages/duplicity/tarfile.py"", line 2319, in next self.fileobj.seek(self.offset) File ""/usr/lib/python2.7/gzip.py"", line 429, in seek self.read(1024) File ""/usr/lib/python2.7/gzip.py"", line 256, in read self._read(readsize) File ""/usr/lib/python2.7/gzip.py"", line 307, in _read uncompress = self.decompress.decompress(buf) error: Error -3 while decompressing: invalid stored block lengths Removing still remembered temporary file /root/.cache/duplicity/a5af988f0861715f4b14466a0c03b4ed/duplicity-N8H4EE-tempdir/mktemp-rDiL3N-1 Removing still remembered temporary file /root/.cache/duplicity/a5af988f0861715f4b14466a0c03b4ed/duplicity-nA91wx-tempdir/mktemp-S2Tpqn-1 ^C close failed in file object destructor: IOError: [Errno 32] Broken pipe Exception KeyboardInterrupt in ignored ```",6 118020940,2014-12-17 08:01:07.100,Cannot restore backup (lp:#1403375),"[Original report](https://bugs.launchpad.net/bugs/1403375) created by **Max (khaberev)** ``` Trying to restore backup from external hdd drive. 
Got the following error: Traceback (most recent call last): File ""/usr/sbin/duplicity"", line 1500, in with_tempdir(main) File ""/usr/sbin/duplicity"", line 1494, in with_tempdir fn() File ""/usr/sbin/duplicity"", line 1343, in main do_backup(action) File ""/usr/sbin/duplicity"", line 1428, in do_backup restore(col_stats) File ""/usr/sbin/duplicity"", line 691, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 538, in Write_ROPaths ITR( ropath.index, ropath ) File ""/usr/lib/python2.7/site-packages/duplicity/lazy.py"", line 344, in __call__ last_branch.fast_process, args) File ""/usr/lib/python2.7/site-packages/duplicity/robust.py"", line 37, in check_common_error return function(*args) File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 591, in fast_process ropath.copy( self.base_path.new_index( index ) ) File ""/usr/lib/python2.7/site-packages/duplicity/path.py"", line 433, in copy other.writefileobj(self.open(""rb"")) File ""/usr/lib/python2.7/site-packages/duplicity/path.py"", line 609, in writefileobj buf = fin.read(_copy_blocksize) File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 204, in read if not self.addtobuffer(): File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 229, in addtobuffer self.tarinfo_list[0] = self.tar_iter.next() File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 336, in next self.set_tarfile() File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 325, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/sbin/duplicity"", line 727, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 144 duplicity 0.7.0 Python 2.7.9 Archlinux x64 up-to-date ntfs -> ext4 Full restore log from: script -c ""duplicity -v9 file:///run/media/khaberev/Elements/arch/myfiles /myfiles/1"" /myfiles/restore.log is attached ```",6 118020938,2014-12-09 02:13:41.651,Cleanup of 
temporary directory failed (lp:#1400563),"[Original report](https://bugs.launchpad.net/bugs/1400563) created by **Hawkwing (androlgenhald)** ``` This error has occurred several times now, always directly after unlocking my computer. Possible duplicate of 710198, but I'm not using a NAS, and I have gpg installed. duplicity 0.7.0 Python 2.7.8 Ubuntu 14.10 Target is ext4 over sftp command: duplicity full --include-globbing-filelist=/home/[redacted]/.duplicity_includes --volsize=1024 --asynchronous-upload / sftp://[redacted]//backup/[redacted] --verbosity=9 --gpg-options=--no-use-agent > duplicity.log 2>duplicity.err stdout: Using archive dir: /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea Using backup name: 6d0d96b8cb4d157d7cbbc82463f021ea Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Reading globbing filelist /home/[redacted]/.duplicity_includes Main action: full ================================================================================ duplicity 0.7.0 ($reldate) Args: /usr/bin/duplicity full --include-globbing-filelist=/home/[redacted]/.duplicity_includes 
--volsize=1024 --asynchronous-upload / sftp://[redacted]//backup/[redacted] --verbosity=9 --gpg-options=--no-use-agent Linux [redacted] 3.16.0-25-generic #33-Ubuntu SMP Tue Nov 4 12:06:54 UTC 2014 x86_64 x86_64 /usr/bin/python 2.7.8 (default, Oct 20 2014, 15:05:19) [GCC 4.9.1] ================================================================================ Using temporary directory /tmp/duplicity-AQdSTi-tempdir Registering (mkstemp) temporary file /tmp/duplicity-AQdSTi-tempdir/mkstemp-hChhHH-1 Temp has 48586055680 available, backup will use approx 2469606195. Local and Remote metadata are synchronized, no sync needed. 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. PASSPHRASE variable not set, asking user. Using temporary directory /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea/duplicity-gsbUn1-tempdir Registering (mktemp) temporary file /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea/duplicity-gsbUn1-tempdir/mktemp-zlXRxd-1 Using temporary directory /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea/duplicity-MPzTq0-tempdir Registering (mktemp) temporary file /home/[redacted]/.cache/duplicity/6d0d96b8cb4d157d7cbbc82463f021ea/duplicity-MPzTq0-tempdir/mktemp-Qvxy6x-1 AsyncScheduler: instantiating at concurrency 1 Registering (mktemp) temporary file /tmp/duplicity-AQdSTi-tempdir/mktemp-56e5_u-2 Selecting / ................ 
Releasing lockfile Removing still remembered temporary file /tmp/duplicity-AQdSTi- tempdir/mktemp-9QXMNU-116 Removing still remembered temporary file /tmp/duplicity-AQdSTi- tempdir/mkstemp-hChhHH-1 Cleanup of temporary directory /tmp/duplicity-AQdSTi-tempdir failed - this is probably a bug. stderr: ssh: starting thread (client mode): 0x32604850L ssh: Connected (version 2.0, client OpenSSH_6.6.1p1) ssh: kex algos:[u'curve25519-sha256@libssh.org', u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group- exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman- group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss', u'ecdsa-sha2-nistp256', u'ssh-ed25519'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish- cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael- cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac- md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', 
u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False ssh: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr ssh: using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none ssh: Switch to new keys ... ssh: Trying SSH agent key fe1edb20fc8c1b77f02012d04112f30b ssh: userauth is OK ssh: Authentication (publickey) successful! ssh: [chan 1] Max packet in: 32768 bytes ssh: [chan 1] Max packet out: 32768 bytes ssh: Secsh channel 1 opened. 
ssh: [chan 1] Sesch channel 1 request ok ssh: [chan 1] Opened sftp connection (server version 3) ssh: [chan 1] stat('/backup') ssh: [chan 1] stat('/backup') ssh: [chan 1] normalize('/backup') ssh: [chan 1] stat('/backup/[redacted]') ssh: [chan 1] mkdir('/backup/[redacted]', 511) ssh: [chan 1] stat('/backup/[redacted]') ssh: [chan 1] normalize('/backup/[redacted]') ssh: [chan 1] listdir('/backup/[redacted]/.') ssh: [chan 1] listdir('/backup/[redacted]/.') ssh: Sending global request ""keepalive@lag.net"" ssh: Sending global request ""keepalive@lag.net"" ssh: Sending global request ""keepalive@lag.net"" ssh: Sending global request ""keepalive@lag.net"" ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol1.difftar.gpg', 'wb') ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol1.difftar.gpg', 'wb') -> 00000000 ssh: Rekeying (hit 32677 packets, 536899588 bytes sent) ................ ssh: Switch to new keys ... 
ssh: Rekeying (hit 32652 packets, 536896704 bytes sent) ................ ssh: Switch to new keys ... ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/backup/[redacted]/duplicity-full.20141208T221741Z.vol1.difftar.gpg') ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol2.difftar.gpg', 'wb') ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol2.difftar.gpg', 'wb') -> 00000000 ssh: Rekeying (hit 32655 packets, 536882300 bytes sent) ................ ssh: Switch to new keys ... ssh: Rekeying (hit 32651 packets, 536896572 bytes sent) ................ ssh: Switch to new keys ... ................ ssh: Switch to new keys ... ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/backup/[redacted]/duplicity-full.20141208T221741Z.vol112.difftar.gpg') ssh: Sending global request ""keepalive@lag.net"" ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol113.difftar.gpg', 'wb') ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol113.difftar.gpg', 'wb') -> 00000000 ssh: Rekeying (hit 32655 packets, 536881788 bytes sent) ................ ssh: Switch to new keys ... ssh: Rekeying (hit 32651 packets, 536896572 bytes sent) ................ ssh: Switch to new keys ... ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/backup/[redacted]/duplicity-full.20141208T221741Z.vol113.difftar.gpg') ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol114.difftar.gpg', 'wb') ssh: [chan 1] open('/backup/[redacted]/duplicity-full.20141208T221741Z.vol114.difftar.gpg', 'wb') -> 00000000 ssh: Rekeying (hit 32654 packets, 536877256 bytes sent) ................ ssh: Switch to new keys ... 
ssh: Rekeying (hit 32653 packets, 536896756 bytes sent) ssh: kex algos:[u'curve25519-sha256@libssh.org', u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group- exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman- group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss', u'ecdsa-sha2-nistp256', u'ssh-ed25519'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish- cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael- cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac- md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac- sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac- ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac- md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac- ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', 
u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False ssh: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr ssh: using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none ssh: Switch to new keys ... ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/backup/[redacted]/duplicity- full.20141208T221741Z.vol114.difftar.gpg') Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1500, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1494, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1343, in main do_backup(action) File ""/usr/bin/duplicity"", line 1464, in do_backup full_backup(col_stats) File ""/usr/bin/duplicity"", line 536, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 397, in write_multivol globals.gpg_profile, globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 327, in GPGWriteFile bytes_to_go = data_size - get_current_size() File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 320, in get_current_size return os.stat(filename).st_size OSError: [Errno 2] No such file or directory: '/tmp/duplicity-AQdSTi- tempdir/mktemp-9QXMNU-116' ```",14 118020932,2014-12-06 01:07:36.715,par2 is required to run duplicity ./setup.py test (lp:#1399843),"[Original report](https://bugs.launchpad.net/bugs/1399843) created by **Gábor Lipták (gliptak)** ``` duplicity 0.8.x (HEAD) python 2.7.6 Ubuntu 14.04 Running .setup.py test produces following errors (it might be documented somewhere as a requirement): test_pylint (testing.test_code.CodeTest) ... 
skipped 'Must set environment var RUN_CODE_TESTS=1' ====================================================================== ERROR: test_delete (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 72, in test_delete self.backend._put(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 90, in put self.transfer(self.wrapped_backend._put, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== ERROR: test_delete_clean (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 84, in test_delete_clean self.backend._put(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 90, in put self.transfer(self.wrapped_backend._put, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== ERROR: test_get (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 51, in test_get self.backend._put(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 90, in put self.transfer(self.wrapped_backend._put, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== ERROR: test_list (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 59, in test_list self.backend._put(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 90, in put self.transfer(self.wrapped_backend._put, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== ERROR: test_move (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 128, in test_move self.backend._move(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 93, in move self.transfer(self.wrapped_backend._move, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== ERROR: test_query_exists (testing.unit.test_backend_instance.Par2BackendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_backend_instance.py"", line 141, in test_query_exists self.backend._put(self.local, 'a') File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 90, in put self.transfer(self.wrapped_backend._put, local, remote) File ""/tmp/duplicity/duplicity/backends/par2backend.py"", line 76, in transfer out, returncode = pexpect.run(par2create, -1, True) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 213, in run env=env, _spawn=spawn) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 230, in _run **kwargs) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 485, in __init__ self._spawn(command, args) File ""/usr/lib/python2.7/dist-packages/pexpect/__init__.py"", line 590, in _spawn 'executable: %s.' % self.command) ExceptionPexpect: The command was not found or was not executable: par2. 
====================================================================== FAIL: test_diff2 (testing.unit.test_diffdir.DDTest) Another diff test - this one involves multivol support ---------------------------------------------------------------------- Traceback (most recent call last): File ""/tmp/duplicity/testing/unit/test_diffdir.py"", line 150, in test_diff2 assert not os.system(""rdiff patch testfiles/dir2/largefile "" AssertionError ---------------------------------------------------------------------- Ran 305 tests in 421.621s FAILED (failures=1, errors=6, skipped=3) ```",10 118020890,2014-11-26 12:29:58.380,Duplicity using sftp loses session during upload (lp:#1396579),"[Original report](https://bugs.launchpad.net/bugs/1396579) created by **Adam Watkins (acwatkins)** ``` Duplicity 0.6.24-1~bpo70+1 tried paramiko (1.7.7.1-3.1), (1.10.1-1~bpo70+1), and (1.15.1, latest from pip install) OS Debian Wheezy, tried both non-backport and backport duplicity. Target Raspbian on raspberry pi, BTRFS. I initially suspected the Raspberry Pi for power issues, but verified the bug without any voltage drop and a solid session that keeps running after the error occurs. (3 tmux panes, one doing nothing, one running htop, and one polling temp, all stay up during error) The problem seems to happen at random times, sometimes very quickly, sometimes it takes several hundred volumes. I have tried --timeout and various ssh timeout configs both on the server and client side. The other sessions never time out even when doing nothing, so it doesn't seem to be ssh session timeout related. It always seems to happen mid-copy, it starts uploading then stops, the raspberry pi cpu goes to idle and duplicity hangs for quite a while before reporting any error. It then goes through all of the retries very quickly and dies. If you start it over again right away, it works again for a while. 
First 200 and last 200 lines of the log file at verbosity 9: Using archive dir: /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12 Using backup name: 5cfb1d777d89aa7a0480ea189be37c12 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded
ssh: starting thread (client mode): 0x29fc950L
ssh: Connected (version 2.0, client OpenSSH_6.0p1)
ssh: kex algos:[u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss', u'ecdsa-sha2-nistp256'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'hmac-sha2-256', u'hmac-sha2-256-96', u'hmac-sha2-512', u'hmac-sha2-512-96', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'hmac-sha2-256', u'hmac-sha2-256-96', u'hmac-sha2-512', u'hmac-sha2-512-96', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False
ssh: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
ssh: using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
ssh: Switch to new keys ...
ssh: Trying discovered key sshkey in /root/.ssh/id_rsa
ssh: userauth is OK
ssh: Authentication (publickey) successful!
ssh: [chan 1] Max packet in: 32768 bytes
ssh: [chan 1] Max packet out: 32768 bytes
ssh: Secsh channel 1 opened.
ssh: [chan 1] Sesch channel 1 request ok
ssh: [chan 1] Opened sftp connection (server version 3)
ssh: [chan 1] stat('/srv') ssh: [chan 1] stat('/srv') ssh: [chan 1] normalize('/srv') ssh: [chan 1] stat('/srv/backup') ssh: [chan 1] stat('/srv/backup') ssh: [chan 1] normalize('/srv/backup')
Main action: inc ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/bin/duplicity --archive-dir=/srv/duplicity-cache --verbosity 9 --timeout=100 --encrypt-key FF288AD2 --full-if-older-than 6M /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures scp://user@servername.com//srv/backup Linux backupserver 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 /usr/bin/python 2.7.3 (default, Mar 13 2014, 11:03:55) [GCC 4.7.2] ================================================================================
Using temporary directory /tmp/duplicity-OyokD_-tempdir Registering (mkstemp) temporary file /tmp/duplicity-OyokD_-tempdir/mkstemp-wNLSVi-1 Temp has 4836524032 available, backup will use approx 34078720.
ssh: [chan 1] listdir('/srv/backup/.') Local and Remote metadata are synchronized, no sync needed. ssh: [chan 1] listdir('/srv/backup/.') 0 files exist on backend 6 files exist in cache
Extracting backup chains from list of files: [u'duplicity-full-signatures.20141123T173040Z.sigtar.part', u'duplicity-full.20141123T173040Z.manifest.part'] File duplicity-full-signatures.20141123T173040Z.sigtar.part is not part of a known set; creating new set Ignoring file (rejected by backup set) 'duplicity-full-signatures.20141123T173040Z.sigtar.part' File duplicity-full.20141123T173040Z.manifest.part is not part of a known set; creating new set Found backup chain [Sun Nov 23 12:30:40 2014]-[Sun Nov 23 12:30:40 2014] Last full backup left a partial set, restarting. Last full backup date: Sun Nov 23 12:30:40 2014
Collection Status ----------------- Connecting with backend: SSHParamikoBackend Archive dir: /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12 Found 0 secondary backup chains. Found primary backup chain with matching signature chain: ------------------------- Chain start time: Sun Nov 23 12:30:40 2014 Chain end time: Sun Nov 23 12:30:40 2014 Number of contained backup sets: 1 Total number of contained volumes: 0 Type of backup set: Time: Num volumes: ------------------------- No orphaned or incomplete backup sets found.
RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning.
Deleting /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-full-signatures.20141123T173040Z.sigtar.part Deleting /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-full.20141123T173040Z.manifest.part Releasing lockfile
Using archive dir: /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12 Using backup name: 5cfb1d777d89aa7a0480ea189be37c12 Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.ftpsbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.~par2wrapperbackend Succeeded
ssh: starting thread (client mode): 0x1866890L
ssh: Connected (version 2.0, client OpenSSH_6.0p1)
ssh: kex algos:[u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss', u'ecdsa-sha2-nistp256'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'hmac-sha2-256', u'hmac-sha2-256-96', u'hmac-sha2-512', u'hmac-sha2-512-96', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'hmac-sha2-256', u'hmac-sha2-256-96', u'hmac-sha2-512', u'hmac-sha2-512-96', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False
ssh: Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
ssh: using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
ssh: Switch to new keys ...
ssh: Trying discovered key sshkey in /root/.ssh/id_rsa
ssh: userauth is OK
ssh: Authentication (publickey) successful!
ssh: [chan 1] Max packet in: 32768 bytes
ssh: [chan 1] Max packet out: 32768 bytes
ssh: Secsh channel 1 opened.
ssh: [chan 1] Sesch channel 1 request ok
ssh: [chan 1] Opened sftp connection (server version 3)
ssh: [chan 1] stat('/srv') ssh: [chan 1] stat('/srv') ssh: [chan 1] normalize('/srv') ssh: [chan 1] stat('/srv/backup') ssh: [chan 1] stat('/srv/backup') ssh: [chan 1] normalize('/srv/backup')
Main action: inc ================================================================================ duplicity 0.6.24 (May 09, 2014) Args: /usr/bin/duplicity --archive-dir=/srv/duplicity-cache --verbosity 9 --timeout=100 --encrypt-key FF288AD2 --full-if-older-than 6M /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures scp://user@servername.com//srv/backup Linux backupserver 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 /usr/bin/python 2.7.3 (default, Mar 13 2014, 11:03:55) [GCC 4.7.2] ================================================================================
Using temporary directory /tmp/duplicity-Zb7zLH-tempdir Registering (mkstemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mkstemp-d_K6J0-1 Temp has 4836519936 available, backup will use approx 34078720.
ssh: [chan 1] listdir('/srv/backup/.') Local and Remote metadata are synchronized, no sync needed. ssh: [chan 1] listdir('/srv/backup/.') 0 files exist on backend 4 files exist in cache
Extracting backup chains from list of files: [] Last full backup date: none Last full backup is too old, forcing full backup
Collection Status ----------------- Connecting with backend: SSHParamikoBackend Archive dir: /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found.
Using temporary directory /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-1KALrH-tempdir Registering (mktemp) temporary file /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-1KALrH-tempdir/mktemp-ufav9d-1 Using temporary directory /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-RP2cbc-tempdir Registering (mktemp) temporary file /srv/duplicity-cache/5cfb1d777d89aa7a0480ea189be37c12/duplicity-RP2cbc-tempdir/mktemp-f1nRxq-1
AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-U1K7pR-2
Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures Comparing . and None Getting delta of (. dir) and None A . Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/.DS_Store Comparing .DS_Store and None Getting delta of (.DS_Store reg) and None A .DS_Store Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/._.DS_Store Comparing ._.DS_Store and None Getting delta of (._.DS_Store reg) and None A ._.DS_Store Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/0-3 Months.lrsmcol Comparing 0-3 Months.lrsmcol and None Getting delta of (0-3 Months.lrsmcol reg) and None A 0-3 Months.lrsmcol Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008 Comparing 2008 and None Getting delta of (2008 dir) and None A 2008 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday Comparing 2008/2008-08-23 - Dad's Birthday and None Getting delta of (2008/2008-08-23 - Dad's Birthday dir) and None A 2008/2008-08-23 - Dad's Birthday Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-001.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-001.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-001.jpg reg) and None A 2008/2008-08-23 - Dad's Birthday/20080823-001.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-003.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-003.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-003.jpg reg) and None A 2008/2008-08-23 - Dad's Birthday/20080823-003.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-005.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-005.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-005.jpg reg) and None A 2008/2008-08-23 - Dad's Birthday/20080823-005.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-008.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-008.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-008.jpg reg) and None A 2008/2008-08-23 - Dad's Birthday/20080823-008.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-009.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-009.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-009.jpg reg) and None A 2008/2008-08-23 - Dad's Birthday/20080823-009.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/20080823-017.jpg Comparing 2008/2008-08-23 - Dad's Birthday/20080823-017.jpg and None Getting delta of (2008/2008-08-23 - Dad's Birthday/20080823-017.jpg reg) and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol201+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol201+14.PAR2
AsyncScheduler: running task synchronously (asynchronicity disabled)
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol47.difftar.gpg', 'wb')
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol47.difftar.gpg', 'wb') -> 00000000
ssh: [chan 1] close(00000000)
ssh: [chan 1] stat('/srv/backup/duplicity-full.20141126T024349Z.vol47.difftar.gpg')
Deleting /tmp/duplicity-Zb7zLH-tempdir/mktemp-AeDjjP-48 Forgetting temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-AeDjjP-48
AsyncScheduler: task completed successfully Processed volume 47 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-cc8eGT-49
Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol215+14.PAR2 Comparing 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol215+14.PAR2 and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol215+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol215+14.PAR2 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol229+14.PAR2 Comparing 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol229+14.PAR2 and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol229+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol229+14.PAR2 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol243+14.PAR2 Comparing 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol243+14.PAR2 and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol243+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol243+14.PAR2 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol257+14.PAR2 Comparing 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol257+14.PAR2 and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol257+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol257+14.PAR2
AsyncScheduler: running task synchronously (asynchronicity disabled)
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol48.difftar.gpg', 'wb')
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol48.difftar.gpg', 'wb') -> 00000000
ssh: [chan 1] close(00000000)
ssh: [chan 1] stat('/srv/backup/duplicity-full.20141126T024349Z.vol48.difftar.gpg')
Deleting /tmp/duplicity-Zb7zLH-tempdir/mktemp-cc8eGT-49 Forgetting temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-cc8eGT-49
AsyncScheduler: task completed successfully Processed volume 48 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-SVF2Wr-50
Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol271+14.PAR2 Comparing 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol271+14.PAR2 and None Getting delta of (2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol271+14.PAR2 reg) and None A 2008/2008-08-23 - Dad's Birthday/Originals/20080823.vol271+14.PAR2 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary dir) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-001.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-001.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-001.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-001.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-002.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-002.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-002.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-002.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-003.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-003.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-003.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-003.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-004.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-004.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-004.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-004.jpg
AsyncScheduler: running task synchronously (asynchronicity disabled)
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol49.difftar.gpg', 'wb')
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol49.difftar.gpg', 'wb') -> 00000000
ssh: [chan 1] close(00000000)
ssh: [chan 1] stat('/srv/backup/duplicity-full.20141126T024349Z.vol49.difftar.gpg')
Deleting /tmp/duplicity-Zb7zLH-tempdir/mktemp-SVF2Wr-50 Forgetting temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-SVF2Wr-50
AsyncScheduler: task completed successfully Processed volume 49 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-geFkFD-51
Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-005.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-005.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-005.jpg reg) and None A 
2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-005.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-006.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-006.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-006.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-006.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-007.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-007.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-007.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-007.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-008.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-008.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-008.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-008.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-009.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-009.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-009.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-009.jpg AsyncScheduler: running task synchronously (asynchronicity disabled) ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol50.difftar.gpg', 'wb') ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol50.difftar.gpg', 'wb') -> 00000000 ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/srv/backup/duplicity- 
full.20141126T024349Z.vol50.difftar.gpg') Deleting /tmp/duplicity-Zb7zLH-tempdir/mktemp-geFkFD-51 Forgetting temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-geFkFD-51 AsyncScheduler: task completed successfully Processed volume 50 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp- Ryoxw4-52 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-010.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-010.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-010.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-010.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-011.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-011.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-011.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-011.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-012.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-012.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-012.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-012.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-013.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-013.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-013.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-013.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - 
Pat and Harry's 50th Anniversary/20081004-014.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-014.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-014.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-014.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-015.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-015.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-015.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-015.jpg AsyncScheduler: running task synchronously (asynchronicity disabled) ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol51.difftar.gpg', 'wb') ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol51.difftar.gpg', 'wb') -> 00000000 ssh: [chan 1] close(00000000) ssh: [chan 1] stat('/srv/backup/duplicity- full.20141126T024349Z.vol51.difftar.gpg') Deleting /tmp/duplicity-Zb7zLH-tempdir/mktemp-Ryoxw4-52 Forgetting temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-Ryoxw4-52 AsyncScheduler: task completed successfully Processed volume 51 Registering (mktemp) temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp- TpPauK-53 Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-016.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-016.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-016.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-016.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-017.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-017.jpg and None Getting delta of 
(2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-017.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-017.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-018.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-018.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-018.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-018.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-019.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-019.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-019.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-019.jpg Selecting /srv/backup/normalRetention/.sync/fileserver/srv/smb/pictures/2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-020.jpg Comparing 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-020.jpg and None Getting delta of (2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-020.jpg reg) and None A 2008/2008-10-04 - Pat and Harry's 50th Anniversary/20081004-020.jpg AsyncScheduler: running task synchronously (asynchronicity disabled) ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol52.difftar.gpg', 'wb') ssh: [chan 1] open('/srv/backup/duplicity- full.20141126T024349Z.vol52.difftar.gpg', 'wb') -> 00000000 ssh: Sending global request ""keepalive@lag.net"" ssh: EOF in transport thread ssh: [chan 1] close(00000000) sftp put of /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 (as duplicity- full.20141126T024349Z.vol52.difftar.gpg) failed: (Try 1 of 5) Will retry in 10 seconds. 
ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol52.difftar.gpg', 'wb') sftp put of /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 (as duplicity-full.20141126T024349Z.vol52.difftar.gpg) failed: Socket is closed (Try 2 of 5) Will retry in 10 seconds. ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol52.difftar.gpg', 'wb') sftp put of /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 (as duplicity-full.20141126T024349Z.vol52.difftar.gpg) failed: Socket is closed (Try 3 of 5) Will retry in 10 seconds. ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol52.difftar.gpg', 'wb') sftp put of /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 (as duplicity-full.20141126T024349Z.vol52.difftar.gpg) failed: Socket is closed (Try 4 of 5) Will retry in 10 seconds. ssh: [chan 1] open('/srv/backup/duplicity-full.20141126T024349Z.vol52.difftar.gpg', 'wb') sftp put of /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 (as duplicity-full.20141126T024349Z.vol52.difftar.gpg) failed: Socket is closed (Try 5 of 5) Will retry in 10 seconds. 
Releasing lockfile Removing still remembered temporary file /tmp/duplicity-Zb7zLH-tempdir/mkstemp-d_K6J0-1 Removing still remembered temporary file /tmp/duplicity-Zb7zLH-tempdir/mktemp-TpPauK-53 Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1509, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1503, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1352, in main do_backup(action) File ""/usr/bin/duplicity"", line 1473, in do_backup full_backup(col_stats) File ""/usr/bin/duplicity"", line 545, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 427, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 145, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib/python2.7/dist-packages/duplicity/asyncscheduler.py"", line 171, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 426, in async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num), File ""/usr/bin/duplicity"", line 315, in put backend.put(tdp, dest_filename) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/_ssh_paramiko.py"", line 306, in put raise BackendException(""Giving up trying to upload '%s' after %d attempts"" % (remote_filename,n)) BackendException: Giving up trying to upload 'duplicity-full.20141126T024349Z.vol52.difftar.gpg' after 5 attempts BackendException: Giving up trying to upload 'duplicity-full.20141126T024349Z.vol52.difftar.gpg' after 5 attempts ```",6 118022251,2022-07-07 05:42:14.767,duplicity does NOT close properly (lp:#1980908),"[Original report](https://bugs.launchpad.net/bugs/1980908) created by **Kenneth Loafman (kenneth-loafman)** ``` Duplicity process stays running after closing deja-dup backups app from Ubuntu desktop. 
Opening up deja-dup again, go to Restore tab, error message shows ""Another duplicity instance is already running with this archive directory"". `ps -ef | grep duplicity` shows the process still running. ```",6 118018349,2022-03-21 19:18:39.611,snap contains python 3.8 and 3.9 libs (lp:#1965814),"[Original report](https://bugs.launchpad.net/bugs/1965814) created by **Kenneth Loafman (kenneth-loafman)** ``` Using the attached snapcraft.yaml, snap builds with conflicting libs: $ find /snap/duplicity/current/ -type d -name 'python3*' /snap/duplicity/current/usr/lib/python3 /snap/duplicity/current/usr/lib/python3.8 <-- this should be here **** /snap/duplicity/current/usr/lib/python3.9 <-- this one causes confusion **** Since duplicity uses _librsync*.so and that's only installed in python3.8, it fails when python 3.9 is used, as is done on Debian 10. It's only one executable and should only have one version of python supplied in the snap. ```",6 118022236,2022-03-01 15:58:23.479,SMB Mount to backup directory prevents mounting parent directories of share (lp:#1962593),"[Original report](https://bugs.launchpad.net/bugs/1962593) created by **Dan Sorak (dsorak)** ``` May also apply to the ""duplicity"" package and is related to network ""mount""s: To reproduce: * Set up a writeable SMB share: smb://1.2.3.4/backup * Create a subdir: smb://1.2.3.4/backup/UbuntuBackup * Using ""deja-dup"" -> ""Storage location"" set:     Storage location: Network Server     Network Location: smb://1.2.3.4/backup/UbuntuBackup     Folder: MyMachineBackup * REBOOT THE MACHINE (or clear any cached mounts to ""smb://1.2.3.4/backup"") * Select ""deja-dup"" -> ""Overview"" -> ""Back Up Now..."" * Allow the backup to start, but not finish * Go to Nautilus (file browser) -> ""+ Other Locations""   - At the bottom set ""Connect to server"" to: smb://1.2.3.4/backup Result: * The parent SMB directory ""smb://1.2.3.4/backup"" is inaccessible because the ""UbuntuBackup"" subdir is currently mounted * Can 
happen at semi-random times if the backup is on a schedule * This is especially a problem when backing up remotely over a VPN which can take a long time * Prevents access to all parent folders of the backup folder * Nautilus will hang, unable to mount the ""backup"" parent directory * Also a problem if the ""smb://1.2.3.4/backup"" folder is bookmarked in nautilus * This problem is persistent throughout backup process and for a period of time afterwards until the mount to ""UbuntuBackup"" times out or the backup process exits Possible solutions: * Add a configuration option to deja-dup/duplicity to specify the ""Share root"" and mount that first before accessing the configured ""Network Location"" * Have deja-dup or duplicity attempt to mount the parent directory/directories first by starting at the root of the SMB share and walking down the directory tree until a successful mount is made. In the above example, start by attempting to mount ""smb://1.2.3.4/backup"" first and then ""smb://1.2.3.4/backup/UbuntuBackup"" (in the case of deeper folder structures, try each one successively) ProblemType: Bug DistroRelease: Ubuntu 20.04 Package: deja-dup 40.7-0ubuntu1 ProcVersionSignature: Ubuntu 5.13.0-30.33~20.04.1-generic 5.13.19 Uname: Linux 5.13.0-30-generic x86_64 NonfreeKernelModules: nvidia_modeset nvidia ApportVersion: 2.20.11-0ubuntu27.21 Architecture: amd64 CasperMD5CheckResult: skip CurrentDesktop: ubuntu:GNOME Date: Tue Mar 1 08:51:47 2022 ExecutablePath: /usr/bin/deja-dup InstallationDate: Installed on 2021-09-13 (168 days ago) InstallationMedia: Ubuntu 20.04.3 LTS ""Focal Fossa"" - Release amd64 (20210819) ProcEnviron:  XDG_RUNTIME_DIR=  SHELL=/bin/bash  PATH=(custom, user)  LANG=en_US.UTF-8 SourcePackage: deja-dup UpgradeStatus: No upgrade log present (probably fresh install) ``` Original tags: amd64 apport-bug backup focal network smb",6 118022232,2021-10-08 21:05:32.071,Deja-dup has stopped working since upgrade to Ubuntu-21.04 (lp:#1946528),"[Original 
report](https://bugs.launchpad.net/bugs/1946528) created by **sTiVo (stevecoh1)** ``` I upgraded to Ubuntu-21.04 (from 20.04->20.10->21.04) and now I find that my deja-dup backup is failing with the following obscure error message: Failed to read /tmp/duplicity-6k2ojcdb-tempdir/mktemp-s39ei7up-2: (, EOFError('Compressed file ended before the end-of-stream marker was reached'), ) A search on this error message produced a similar issue: https://answers.launchpad.net/duplic...uestion/693039 from 2020 which died for a lack of response. Can someone provide me some help on how to proceed here to get my backups working again? Thanks. OS: Ubuntu 21.04 ```",8 118022221,2021-06-24 19:36:00.284,"duplicity falsely reports B2 python SDK not installed, fails backup to B2 (lp:#1933540)","[Original report](https://bugs.launchpad.net/bugs/1933540) created by **Kenneth Loafman (kenneth-loafman)** ``` Description: Ubuntu 21.04 Release: 21.04 duplicity: Installed: 0.8.17-1build1 Candidate: 0.8.17-1build1 Version table: *** 0.8.17-1build1 500 500 http://us.archive.ubuntu.com/ubuntu hirsute/main amd64 Packages 100 /var/lib/dpkg/status What I expected to happen: duplicity successfully backs up to B2 when I run `duplicity ~ b2://@` What happened instead: duplicity immediately fails with the error ""BackendException: B2 backend requires B2 Python SDK (pip install b2sdk)"". duplicity was working fine for me backing up my local machine to BackBlaze B2, when I was running 20.10. After upgrading to 21.04, now I get this problem. Both b2 and b2sdk are installed by pip and pip3, with and without sudo. 
I do use pyenv to manage different Python versions, but pyenv global is set to system when I use pip to try to troubleshoot this, so this should not be interfering. ProblemType: Bug DistroRelease: Ubuntu 21.04 Package: duplicity 0.8.17-1build1 ProcVersionSignature: Ubuntu 5.11.0-22.23-generic 5.11.21 Uname: Linux 5.11.0-22-generic x86_64 ApportVersion: 2.20.11-0ubuntu65.1 Architecture: amd64 CasperMD5CheckResult: unknown CurrentDesktop: Regolith:GNOME-Flashback:GNOME Date: Thu Jun 24 12:28:50 2021 InstallationDate: Installed on 2020-11-20 (215 days ago) InstallationMedia: Ubuntu 20.10 ""Groovy Gorilla"" - Release amd64 (20201022) ProcEnviron: TERM=alacritty PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=en_US.UTF-8 SHELL=/bin/bash SourcePackage: duplicity UpgradeStatus: Upgraded to hirsute on 2021-06-15 (8 days ago) ``` Original tags: amd64 apport-bug hirsute",6 118022218,2021-06-22 17:26:27.131,too many files open (lp:#1933261),"[Original report](https://bugs.launchpad.net/bugs/1933261) created by **Heiner Kuhlmann (heiner-k)** ``` System: openSUSE Leap 15.2 / duplicity 0.8.11 To preempt unhelpful hints: I am aware that my problem with the mounted dav directory can easily be avoided by duplicity / webdavs://.... But webdav is not the problem; it is a shortcoming in duplicity. This problem occurs only in special cases: duplicity / file:///Save/DAV_mount /Save/DAV_mount is mounted via davfs2. The system log reports the error open files exceed max cache size by ... In the directory /var/cache/davfs2/webdav.magentacloud.de+Save-Cloud+root/ there are many files like duplicity-inc.20210622T104338Z.to.20210622T142854Z.vol17.difftar.gpg-kIVuGX ... duplicity-inc.20210622T104338Z.to.20210622T142854Z.vol140.difftar.gpg-2FrxIW Almost all of these files have already been transferred to the webdav directory and could be deleted from the dav cache, but davfs2 cannot delete them because they are still open. 
The cause is probably that duplicity keeps a lot of files open even though they have already been completed. Presumably, duplicity does not close files immediately after they have been completely written. In the spirit of good programming practice, every resource - including open files - should be released as soon as possible. ```",6 118019117,2021-03-14 01:43:29.877,"Certain files aren't restored, despite being in the repository (lp:#1919054)","[Original report](https://bugs.launchpad.net/bugs/1919054) created by **Xenon Fiber System (xenonfiber)** ``` I'm running a restore operation on a local repository, and I'm able to get the vast majority of the files extracted, but there are a couple that have a fatal error. python2: ERROR: (rs_file_copy_cb) unexpected eof on fd494 python2: ERROR: (rs_job_complete) patch job failed: unexpected end of input Error 'librsync error 103 while in patch cycle' patching [FILE] In case it's relevant, all the successfully extracted files still throw another error, but from what I've read, it's just due to the ownership permissions not being successfully restored, but otherwise extracting just fine: Error '[Errno 1] Operation not permitted: '/[FILE]'' processing [FILE] Duplicity 0.7.17 Python 2.7.17 Linux Mint 19.3 ext4 Filesystem ```",6 118019109,2021-03-13 10:32:05.430,Very bad performance when deleting files from mediafire backend (lp:#1919020),"[Original report](https://bugs.launchpad.net/bugs/1919020) created by **Jose Riha (jose1711)** ``` Under certain conditions the performance of removing old backups from mediafire backend can be extremely bad - it can take hours to process such request. The main contributor to this slowdown is a combination of - using mediafire in multibackend and the fact that file lists are not cached per store + - multiple calls to list mediafire folder contents (e. g. in multibackend.py's _delete function or mediafire backend's delete_file function which is still using API 1.3 for deletion. 
Unlike the latest version, version 1.3 does not allow deleting a file by specifying its full path, so for each request the whole folder contents must be retrieved) + - a large number of files (e. g. 4 000+) in the MF directory duplicity 0.8.18 python 3.9.2 Arch Linux, x86_64 ```",6 118022217,2021-01-17 03:19:50.852,duplicity 0.8.18 build failure on darwin_arm64 (lp:#1912084),"[Original report](https://bugs.launchpad.net/bugs/1912084) created by **Rui Chen (chenrui333)** ``` 👋 trying to build the latest release, but running into a build issue. The error log is below:
build failure ``` Preparing wheel metadata: started Running command /opt/homebrew/Cellar/duplicity/0.8.18/libexec/bin/python3.9 /opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /private/tmp/tmpi8zpbk2y Traceback (most recent call last): File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py"", line 280, in main() File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py"", line 263, in main json_out['return_val'] = hook(**hook_input['kwargs']) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py"", line 133, in prepare_metadata_for_build_wheel return hook(metadata_directory, config_settings) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/build_meta.py"", line 161, in prepare_metadata_for_build_wheel self.run_setup() File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/build_meta.py"", line 145, in run_setup exec(compile(code, __file__, 'exec'), locals()) File ""setup.py"", line 46, in setup( File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/__init__.py"", line 153, in setup return distutils.core.setup(**attrs) File ""/opt/homebrew/Cellar/python@3.9/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py"", line 108, in setup _setup_distribution = dist = klass(attrs) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/dist.py"", line 423, in __init__ _Distribution.__init__(self, { File ""/opt/homebrew/Cellar/python@3.9/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py"", line 292, in __init__ self.finalize_options() File 
""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/dist.py"", line 695, in finalize_options ep(self) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/setuptools/dist.py"", line 702, in _finalize_setup_keywords ep.load()(self, ep.name, value) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/cffi/setuptools_ext.py"", line 219, in cffi_modules add_cffi_module(dist, cffi_module) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/cffi/setuptools_ext.py"", line 49, in add_cffi_module execfile(build_file_name, mod_vars) File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/cffi/setuptools_ext.py"", line 25, in execfile exec(code, glob, glob) File ""src/build_bcrypt.py"", line 21, in ffi = FFI() File ""/private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/cffi/api.py"", line 48, in __init__ import _cffi_backend as backend ImportError: dlopen(/private/tmp/pip-build-env- zgnyk0yt/overlay/lib/python3.9/site- packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure Referenced from: /private/tmp/pip-build-env- zgnyk0yt/overlay/lib/python3.9/site- packages/_cffi_backend.cpython-39-darwin.so Expected in: flat namespace in /private/tmp/pip-build-env-zgnyk0yt/overlay/lib/python3.9/site- packages/_cffi_backend.cpython-39-darwin.so Preparing wheel metadata: finished with status 'error' ERROR: Command errored out with exit status 1: /opt/homebrew/Cellar/duplicity/0.8.18/libexec/bin/python3.9 /opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /private/tmp/tmpi8zpbk2y Check the logs for full command output. 
Exception information: Traceback (most recent call last): File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/cli/base_command.py"", line 224, in _main status = self.run(options, args) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/cli/req_command.py"", line 180, in wrapper return func(self, options, args) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/commands/install.py"", line 320, in run requirement_set = resolver.resolve( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/resolver.py"", line 121, in resolve self._result = resolver.resolve( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/resolvelib/resolvers.py"", line 445, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/resolvelib/resolvers.py"", line 310, in resolve name, crit = self._merge_into_criterion(r, parent=None) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/resolvelib/resolvers.py"", line 169, in _merge_into_criterion name = self._p.identify(requirement) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/provider.py"", line 60, in identify return dependency.name File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/requirements.py"", line 41, in name return self.candidate.name File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/candidates.py"", line 188, in name return self.project_name File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- 
packages/pip/_internal/resolution/resolvelib/candidates.py"", line 182, in project_name self._name = canonicalize_name(self.dist.project_name) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/candidates.py"", line 239, in dist self._prepare() File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/candidates.py"", line 226, in _prepare dist = self._prepare_distribution() File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/resolution/resolvelib/candidates.py"", line 318, in _prepare_distribution return self._factory.preparer.prepare_linked_requirement( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/operations/prepare.py"", line 480, in prepare_linked_requirement return self._prepare_linked_requirement(req, parallel_builds) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/operations/prepare.py"", line 523, in _prepare_linked_requirement dist = _get_prepared_distribution( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/operations/prepare.py"", line 88, in _get_prepared_distribution abstract_dist.prepare_distribution_metadata(finder, build_isolation) File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/distributions/sdist.py"", line 41, in prepare_distribution_metadata self.req.prepare_metadata() File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/req/req_install.py"", line 555, in prepare_metadata self.metadata_directory = self._generate_metadata() File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/req/req_install.py"", line 540, in _generate_metadata return generate_metadata( File 
""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/operations/build/metadata.py"", line 34, in generate_metadata distinfo_dir = backend.prepare_metadata_for_build_wheel( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/wrappers.py"", line 193, in prepare_metadata_for_build_wheel return self._call_hook('prepare_metadata_for_build_wheel', { File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/wrappers.py"", line 274, in _call_hook self._subprocess_runner( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/utils/subprocess.py"", line 271, in runner call_subprocess( File ""/opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_internal/utils/subprocess.py"", line 240, in call_subprocess raise InstallationError(exc_msg) pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /opt/homebrew/Cellar/duplicity/0.8.18/libexec/bin/python3.9 /opt/homebrew/Cellar/duplicity/0.8.18/libexec/lib/python3.9/site- packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /private/tmp/tmpi8zpbk2y Check the logs for full command output. Removed file:///private/tmp/duplicity-- bcrypt-20210112-91108-1ghubc0/bcrypt-3.2.0 from build tracker '/private/tmp/pip-req-tracker-xyw0nei5' Removed build tracker: '/private/tmp/pip-req-tracker-xyw0nei5' ```
The full build log is here: https://github.com/Homebrew/homebrew-core/runs/1689770722 (relates to https://github.com/Homebrew/homebrew-core/pull/68653) ```",6 118022984,2020-09-03 09:41:57.271,"Duplicity fails backups with message ""SHA1 hash mismatch for file..."" every single time (lp:#1894073)","[Original report](https://bugs.launchpad.net/bugs/1894073) created by **Santiago Gala (sgala)** ``` Backups had been going OK until some days ago. Then they started failing with the message: Invalid data - SHA1 hash mismatch for file: duplicity-inc.20200828T014540Z.to.20200829T012824Z.vol1.difftar.gz Calculated hash: 06f2c3af5c64b989db2507d842a64e19d1cb7206 Manifest hash: 77788b1c0808968b82904e3b0f688143fbebe1d0 No way to take action and proceed is offered. ```",6 118022478,2020-06-10 09:39:19.359,signature file cropped at 2.1GB (lp:#1882916),"[Original report](https://bugs.launchpad.net/bugs/1882916) created by **torben (crittac)** ``` version: duplicity 0.7.18.2 python: Python 2.7.16 RaspberryPi OS release 2020-05-27 architecture: armv7l GNU/Linux command: sudo -u www-data PASSPHRASE="""" duplicity --encrypt-key GARBELD /var/www/nextcloud/data/ file://. --exclude ""**Music/**"" -v9 Issue: the signature file seems to be cut off at 2.1 GB. Backing up the same data on 64-bit Ubuntu resulted in a 2.7 GB signature file.
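The 2.1 GB cut-off described above is suspiciously close to the signed 32-bit size limit (2^31 bytes), which would implicate a 32-bit file-size/offset type on the armv7l build. This is an inference, not something the report confirms; a quick check of the arithmetic:

```python
# The signed 32-bit byte limit: a plausible (unconfirmed) explanation
# for a signature file truncated at roughly 2.1 GB on 32-bit armv7l.
limit = 2 ** 31
print(limit)             # 2147483648 bytes
print(limit / 10 ** 9)   # 2.147483648 -- i.e. about 2.1 GB decimal
```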
The result of this is any other following backups fails with `File ""/usr/lib/python2.7/tarfile.py"", line 2352, in next raise ReadError(""unexpected end of data"") ReadError: unexpected end of data ` Logs: head -n200 /tmp/nc_backup.log gpg: WARNING: unsafe permissions on homedir '/var/www/.gnupg' Using archive dir: /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364 Using backup name: db7c0f583d45b95c45d4d76e12d54364 GPG binary is gpg, version 2.2.12 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: inc Acquiring lockfile /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/lockfile 
================================================================================ duplicity 0.7.18.2 (October 17, 2018) Args: /usr/bin/duplicity --encrypt-key E536A8C3 /var/www/nextcloud/data/ file://. --exclude **Music/** -v9 Linux system 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l /usr/bin/python2 2.7.16 (default, Oct 10 2019, 22:02:15) [GCC 8.3.0] ================================================================================ Using temporary directory /tmp/duplicity-bWiKym-tempdir Registering (mkstemp) temporary file /tmp/duplicity-bWiKym-tempdir/mkstemp- sJQtEF-1 Temp has 106878672896 available, backup will use approx 272629760. Synchronizing remote metadata to local cache... Deleting local /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity-full- signatures.20200608T112308Z.sigtar.gz (not authoritative at backend). Deleting local /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity- full.20200608T112308Z.manifest (not authoritative at backend). 1 file exists on backend 1 file exists in cache Extracting backup chains from list of files: [u'lost+found'] File lost+found is not part of a known set; creating new set Ignoring file (rejected by backup set) 'lost+found' Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. No signatures found, switching to full backup. 
Using temporary directory /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity- xhQoR2-tempdir Registering (mktemp) temporary file /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity- xhQoR2-tempdir/mktemp-mI0pKK-1 Using temporary directory /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity- cj0cFD-tempdir Registering (mktemp) temporary file /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/duplicity- cj0cFD-tempdir/mktemp-_9otuh-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-bWiKym- tempdir/mktemp-9REXaB-2 Selecting /var/www/nextcloud/data Comparing . and None Getting delta of (. dir) and None A . Selection: examining path /var/www/nextcloud/data/.htaccess Selection: result: None from function: Command-line exclude glob: **Music/** Selection: + including file Selecting /var/www/nextcloud/data/.htaccess Comparing .htaccess and None Getting delta of (.htaccess reg) and None tail of log: File duplicity-full.20200609T163034Z.vol124.difftar.gpg is part of known set Found backup chain [Tue Jun 9 17:30:34 2020]-[Tue Jun 9 17:30:34 2020] --------------[ Backup Statistics ]-------------- StartTime 1591720234.88 (Tue Jun 9 17:30:34 2020) EndTime 1591757495.26 (Wed Jun 10 03:51:35 2020) ElapsedTime 37260.39 (10 hours 21 minutes) SourceFiles 439667 SourceFileSize 201386713287 (188 GB) NewFiles 439667 NewFileSize 201386713287 (188 GB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 439667 RawDeltaSize 201207496903 (187 GB) TotalDestinationSizeChange 198328583475 (185 GB) Errors 0 ------------------------------------------------- Releasing lockfile /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/lockfile Removing still remembered temporary file /tmp/duplicity-bWiKym- tempdir/mkstemp-sJQtEF-1 Releasing lockfile /var/www/.cache/duplicity/db7c0f583d45b95c45d4d76e12d54364/lockfile ```",6 118022206,2020-05-20 
14:30:55.253,override_dh_auto_install fails when building in a virtualenv (lp:#1879720),"[Original report](https://bugs.launchpad.net/bugs/1879720) created by **Mischa ter Smitten (mischa-ter-smitten)** ``` I'm trying to build duplicity (version 0.7.19) in a virtualenv. I'm using the following steps: sudo apt install build-essential debhelper devscripts equivs dh-virtualenv; mkvirtualenv -ppython2 ANSPB-453; pip install --upgrade pip; pip install make-deb; pip install -r requirements.txt; sudo apt-get install librsync-dev par2 rdiff; # dpkg-buildpackage -us -uc; dpkg-buildpackage -us -uc -f; The last command fails on: rm -r debian/duplicity/usr/share/doc/duplicity-* when building in a virtualenv this directory does not exist. debian/duplicity/home/mtersmitten/.virtualenvs/ANSPB-453/share/doc/duplicity-0.7.19 however does. changing the rm command to: find debian/duplicity -path ""*share/doc/*"" -name ""duplicity-*"" -print0 | xargs --no-run-if-empty -0 rm -r fixes the issue. ```",6 118022195,2020-04-09 05:34:01.748,Duplicity fails backups at the end every time (lp:#1871757),"[Original report](https://bugs.launchpad.net/bugs/1871757) created by **Kenneth Loafman (kenneth-loafman)** ``` Backup seems to progress well (it takes a long time) and ends up with this error: Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1393, in main do_backup(action) File ""/usr/bin/duplicity"", line 1472, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 728, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 389, in yield_tuples 
setrorps(overflow, elems) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 107, in filter_path_iter for path in path_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 764, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 8 --- ProblemType: Bug ApportVersion: 2.20.9-0ubuntu7.14 Architecture: amd64 CurrentDesktop: ubuntu:GNOME DistroRelease: Ubuntu 18.04 InstallationDate: Installed on 2017-12-11 (849 days ago) InstallationMedia: Ubuntu 17.10 ""Artful Aardvark"" - Release amd64 (20171018) Package: duplicity 0.7.17-0ubuntu1.1 PackageArchitecture: amd64 ProcVersionSignature: Ubuntu 5.3.0-42.34~18.04.1-generic 5.3.18 Tags: bionic Uname: Linux 5.3.0-42-generic x86_64 UpgradeStatus: Upgraded to bionic on 2018-06-05 (673 days ago) UserGroups: adm cdrom dip docker input lpadmin plugdev sambashare sudo wireshark _MarkForUpload: True ``` Original tags: apport-collected bionic",6 118022172,2020-01-18 07:31:19.383,Add multi core support for compression (lp:#1860200),"[Original report](https://bugs.launchpad.net/bugs/1860200) created by **Byron (byronester)** ``` Duplicity compresses and decompresses with a single process atm. Performance can be increased by parallelizing this process. This in turn could be done using something like mgzip. https://code.launchpad.net/~byronester/duplicity/duplicity ```",16 118022168,2020-01-10 16:07:48.616,Duplicity 0.8.09 seems not work with S3 + https proxy (python 3 / boto bug ?) 
(lp:#1859200),"[Original report](https://bugs.launchpad.net/bugs/1859200) created by **Gaël Bréard (gbrd)** ``` Duplicity 0.8.09 (snap release 48) does not seem to work with S3 + an https proxy (Python 3 / boto bug?). It may be related to a known boto/Python 3 bug: https://github.com/boto/boto/issues/3561 sock.sendall(""CONNECT %s HTTP/1.0\r\n"" % host) TypeError: a bytes-like object is required, not 'str' It works with version 0.8.08 / snap release 33 ```",6 118022467,2019-11-28 12:09:56.559,Large data fails to back up with duplicity (lp:#1854351),"[Original report](https://bugs.launchpad.net/bugs/1854351) created by **Gangadhar (gangadhar24)** ``` We have 103 GB of data in a volume and are trying to take a backup, but it takes too much time and fails to back up the entire data set. duplicity 0.8.03 Data size 103 GB endpoint is s3.us-south.objectstorage.softlayer.net python version 3.7.3 cat /etc/os-release NAME=""Alpine Linux"" ID=alpine VERSION_ID=3.10.1 Note: one of our customers failed to back up 130 GB. ```",6 118019420,2019-11-22 20:39:22.020,Duplicity doesn't install all dependencies (lp:#1853650),"[Original report](https://bugs.launchpad.net/bugs/1853650) created by **Magnus (mxvalle)** ``` Installing the duplicity package does not include all its dependencies, specifically the B2 backend. When running a command like this with a freshly installed duplicity on a freshly installed Ubuntu 19.10 `$ duplicity Photos/ b2://123:567@photos` it returns the following error `BackendException: B2 backend requires B2 Python APIs (pip install b2)` ``` Original tags: packaging",12 118019416,2019-11-14 15:18:41.473,PCA and Swift backends don't make use of etag (md5) (lp:#1852597),"[Original report](https://bugs.launchpad.net/bugs/1852597) created by **Kenneth Loafman (kenneth-loafman)** ``` Do PCA and Swift backends make use of the etag (md5 checksum) on upload? (I think they don't.) If not, wouldn't that be beneficial? 
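The etag verification asked about here is in essence an MD5 comparison: for a non-multipart upload, Swift's etag is the plain MD5 hex digest of the object. A minimal sketch of what a backend could do after each upload (`md5_hexdigest` is an illustrative helper, not duplicity API):

```python
import hashlib

def md5_hexdigest(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so arbitrarily large volumes
    # do not have to fit in memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# After an upload, compare against the etag the server returned, e.g.:
# if md5_hexdigest(local_volume) != remote_etag.strip('\"'):
#     raise IOError('upload corrupted in transit')
```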
From OVH Public Cloud Archive reference for developers ( https://docs.ovh.com/gb/en/storage/pca/dev/#uploading-an-archive ): ""When uploading an archive to OVH Public Cloud Archive, it is very important to verify that both the local and remote copy of the data are indentical. This is a guarantee data received remotely is correct and that nobody has been able to alter its content."" ```",8 118022164,2019-11-09 18:05:37.627,Ubuntu 19.10 TypeError: Cannot use string pattern on bytes-like object (lp:#1851951),"[Original report](https://bugs.launchpad.net/bugs/1851951) created by **Mike Doneske (mdoneske)** ``` Duplicity Version: 0.8.04 Python Version: Python3 3.7.5rc1 Ubuntu 19.10 Target Filesystem: Linux Using archive dir: /home/mike/.cache/duplicity/ab8846365d1058b163dab4ef33567f9b Using backup name: ab8846365d1058b163dab4ef33567f9b GPG binary is gpg, version (2, 2, 12) Import of duplicity.backends.adbackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.jottacloudbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pcabackend Succeeded Import of 
duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded /usr/lib/python3/dist- packages/duplicity/backends/ssh_paramiko_backend.py:409: ResourceWarning: unclosed file <_io.TextIOWrapper name='/etc/ssh/ssh_config' mode='r' encoding='UTF-8'>   sshconfig.parse(open(file)) ResourceWarning: Enable tracemalloc to get the object allocation traceback ssh: starting thread (client mode): 0xf211ba90 ssh: Local version/idstring: SSH-2.0-paramiko_2.6.0 ssh: Remote version/idstring: SSH-2.0-OpenSSH_7.9p1 Ubuntu-10 ssh: Connected (version 2.0, client OpenSSH_7.9p1) ssh: kex algos:['curve25519-sha256', 'curve25519-sha256@libssh.org', 'ecdh- sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie- hellman-group-exchange-sha256', 'diffie-hellman-group16-sha512', 'diffie- hellman-group18-sha512', 'diffie-hellman-group14-sha256', 'diffie-hellman- group14-sha1'] server key:['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa', 'ecdsa-sha2-nistp256', 'ssh-ed25519'] client encrypt:['chacha20-poly1305@openssh.com', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr', 'aes128-gcm@openssh.com', 'aes256-gcm@openssh.com'] server encrypt:['chacha20-poly1305@openssh.com', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr', 'aes128-gcm@openssh.com', 'aes256-gcm@openssh.com'] client mac:['umac-64-etm@openssh.com', 'umac-128-etm@openssh.com', 'hmac- sha2-256-etm@openssh.com', 'hmac-sha2-512-etm@openssh.com', 'hmac- sha1-etm@openssh.com', 'umac-64@openssh.com', 'umac-128@openssh.com', 'hmac-sha2-256', 'hmac-sha2-512', 'hmac-sha1'] server mac:['umac-64-etm@openssh.com', 'umac-128-etm@openssh.com', 'hmac- sha2-256-etm@openssh.com', 
'hmac-sha2-512-etm@openssh.com', 'hmac- sha1-etm@openssh.com', 'umac-64@openssh.com', 'umac-128@openssh.com', 'hmac-sha2-256', 'hmac-sha2-512', 'hmac-sha1'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False ssh: Kex agreed: curve25519-sha256@libssh.org ssh: HostKey agreed: ecdsa-sha2-nistp256 ssh: Cipher agreed: aes128-ctr ssh: MAC agreed: hmac-sha2-256 ssh: Compression agreed: none ssh: kex engine KexCurve25519 specified hash_algo ssh: Switch to new keys ... ssh: userauth is OK ssh: Authentication (password) successful! ssh: [chan 0] Max packet in: 32768 bytes ssh: Received global request ""hostkeys-00@openssh.com"" ssh: Rejecting ""hostkeys-00@openssh.com"" global request from server. ssh: [chan 0] Max packet out: 32768 bytes ssh: Secsh channel 0 opened. ssh: [chan 0] Sesch channel 0 request ok ssh: [chan 0] EOF received (0) ssh: [chan 0] EOF sent (0) Reading globbing filelist /home/mike/backup_filelist Main action: inc Acquiring lockfile b'/home/mike/.cache/duplicity/ab8846365d1058b163dab4ef33567f9b/lockfile' ================================================================================ duplicity $version ($reldate) Args: /usr/bin/duplicity --verbosity debug --asynchronous-upload --include- filelist /home/mike/backup_filelist --volsize 500 / scp://mike@192.168.1.57//backups/golf Linux golf 5.3.0-19-generic #20-Ubuntu SMP Fri Oct 18 09:04:39 UTC 2019 x86_64 x86_64 /usr/bin/python3 3.7.5rc1 (default, Oct 8 2019, 16:47:45) [GCC 9.2.1 20191008] ================================================================================ Local and Remote metadata are synchronized, no sync needed. Last full backup date: Fri Jul 19 05:00:03 2019 Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /home/mike/.cache/duplicity/ab8846365d1058b163dab4ef33567f9b Found 0 secondary backup chains. 
Found primary backup chain with matching signature chain: ------------------------- Chain start time: Fri Jul 19 05:00:03 2019 Chain end time: Fri Oct 18 05:00:01 2019 Number of contained backup sets: 14 Total number of contained volumes: 58  Type of backup set: Time: Num volumes:                 Full Fri Jul 19 05:00:03 2019 5          Incremental Fri Jul 26 05:00:02 2019 20          Incremental Fri Aug 2 05:00:02 2019 1          Incremental Fri Aug 9 05:00:01 2019 1          Incremental Fri Aug 16 05:00:02 2019 2          Incremental Fri Aug 23 05:00:03 2019 1          Incremental Fri Aug 30 05:00:01 2019 1          Incremental Fri Sep 6 05:00:02 2019 1          Incremental Fri Sep 13 05:00:02 2019 2          Incremental Fri Sep 20 05:00:04 2019 17          Incremental Fri Sep 27 05:00:03 2019 2          Incremental Fri Oct 4 05:00:02 2019 3          Incremental Fri Oct 11 05:00:02 2019 1          Incremental Fri Oct 18 05:00:01 2019 1 ------------------------- No orphaned or incomplete backup sets found. Registering (mktemp) temporary file /tmp/duplicity-77dapwv4-tempdir/mktemp- ylgpl9mx-2 ssh: [chan 3] Max packet in: 32768 bytes ssh: [chan 3] Max packet out: 32768 bytes ssh: Secsh channel 3 opened. ssh: [chan 3] Sesch channel 3 request ok ssh: [chan 3] EOF received (3) Backtrace of previous error: Traceback (innermost last):   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 371, in inner_retry     return fn(self, *args)   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 554, in get     self.backend._get(remote_filename, local_path)   File ""/usr/lib/python3/dist- packages/duplicity/backends/ssh_paramiko_backend.py"", line 338, in _get     m = re.match(r""C([0-7]{4})\s+(\d+)\s+(\S.*)$"", msg)   File ""/usr/lib/python3.7/re.py"", line 173, in match     return _compile(pattern, flags).match(string)  TypeError: cannot use a string pattern on a bytes-like object ssh: [chan 3] EOF sent (3) Attempt 1 failed. 
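The traceback above fails because a text (str) pattern is matched against `msg`, which under Python 3 arrives from the SSH channel as bytes; `re` requires the pattern and the subject to be the same type. A minimal reproduction and fix (the sample `msg` value is made up for illustration):

```python
import re

# Data read from a socket or SSH channel is bytes under Python 3.
msg = b'C0644 1234 duplicity-full.vol1.difftar.gpg'

# re.match(r'C([0-7]{4})\s+(\d+)\s+(\S.*)$', msg) raises
# 'TypeError: cannot use a string pattern on a bytes-like object'.
# Using a bytes pattern (rb'...') matches bytes input:
m = re.match(rb'C([0-7]{4})\s+(\d+)\s+(\S.*)$', msg)
print(m.group(1), m.group(2))  # b'0644' b'1234'
```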
TypeError: cannot use a string pattern on a bytes-like object ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 4] Max packet in: 32768 bytes ssh: [chan 4] Max packet out: 32768 bytes ssh: Secsh channel 4 opened. ssh: [chan 4] Sesch channel 4 request ok Backtrace of previous error: Traceback (innermost last):   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 371, in inner_retry     return fn(self, *args)   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 554, in get     self.backend._get(remote_filename, local_path)   File ""/usr/lib/python3/dist- packages/duplicity/backends/ssh_paramiko_backend.py"", line 338, in _get     m = re.match(r""C([0-7]{4})\s+(\d+)\s+(\S.*)$"", msg)   File ""/usr/lib/python3.7/re.py"", line 173, in match     return _compile(pattern, flags).match(string)  TypeError: cannot use a string pattern on a bytes-like object Attempt 2 failed. TypeError: cannot use a string pattern on a bytes-like object ssh: [chan 4] EOF received (4) ssh: [chan 4] EOF sent (4) ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 5] Max packet in: 32768 bytes ssh: [chan 5] Max packet out: 32768 bytes ssh: Secsh channel 5 opened. ssh: [chan 5] Sesch channel 5 request ok ssh: [chan 5] EOF received (5) ssh: [chan 5] EOF sent (5) Backtrace of previous error: Traceback (innermost last):   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 371, in inner_retry     return fn(self, *args)   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 554, in get     self.backend._get(remote_filename, local_path)   File ""/usr/lib/python3/dist- packages/duplicity/backends/ssh_paramiko_backend.py"", line 338, in _get     m = re.match(r""C([0-7]{4})\s+(\d+)\s+(\S.*)$"", msg)   File ""/usr/lib/python3.7/re.py"", line 173, in match     return _compile(pattern, flags).match(string)  TypeError: cannot use a string pattern on a bytes-like object Attempt 3 failed. 
TypeError: cannot use a string pattern on a bytes-like object ssh: Sending global request ""keepalive@lag.net"" ssh: [chan 6] Max packet in: 32768 bytes ssh: [chan 6] Max packet out: 32768 bytes ssh: Secsh channel 6 opened. ssh: [chan 6] Sesch channel 6 request ok ssh: [chan 6] EOF received (6) Backtrace of previous error: Traceback (innermost last):   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 371, in inner_retry     return fn(self, *args)   File ""/usr/lib/python3/dist-packages/duplicity/backend.py"", line 554, in get     self.backend._get(remote_filename, local_path)   File ""/usr/lib/python3/dist- packages/duplicity/backends/ssh_paramiko_backend.py"", line 338, in _get     m = re.match(r""C([0-7]{4})\s+(\d+)\s+(\S.*)$"", msg)   File ""/usr/lib/python3.7/re.py"", line 173, in match     return _compile(pattern, flags).match(string)  TypeError: cannot use a string pattern on a bytes-like object ```",12 118022162,2019-10-22 14:49:13.279,FilePrefixError on all --exclude (lp:#1849335),"[Original report](https://bugs.launchpad.net/bugs/1849335) created by **Jay Bienvenu (jbnv)** ``` I'm getting this error for the --exclude option, no matter what I specify to exclude and even if I --exclude-filelist. Here's an example of my trace. I'm using duplicity 0.7.17, Ubuntu 18.04.3 LTS, Python 2.7.15+. 
> duplicity --exclude .git ~/source_folder file:///target Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1380, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1149, in ProcessCommandLine set_selection() File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 973, in set_selection sel.ParseArgs(select_opts, select_files) File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 250, in ParseArgs self.add_selection_func(self.glob_get_sf(arg, 0)) File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 434, in glob_get_sf sel_func = self.glob_get_filename_sf(glob_str, include) File ""/usr/lib/python2.7/dist-packages/duplicity/selection.py"", line 502, in glob_get_filename_sf raise FilePrefixError(filename) FilePrefixError: .git > duplicity -v9 Command line error: Expected 2 args, got 0 Enter 'duplicity --help' for help screen. Using temporary directory /tmp/duplicity-oupQbS-tempdir ```",6 118022160,2019-10-18 08:33:53.601,Fresh ubuntu 19.10 install PyDrive error (lp:#1848669),"[Original report](https://bugs.launchpad.net/bugs/1848669) created by **Ioan Cristea (krioft)** ``` The error is: BackendException: PyDrive backend requires PyDrive installation. Please read the manpage for setup details. Exception: No module named 'apiclient' DV: 40.1-1ubuntu2 Ubuntu 19.10 Linux python -V Python 2.7.17rc1 ```",10 118022156,2019-09-09 21:15:03.507,SSE-C support (for AWS S3 backend) (lp:#1843343),"[Original report](https://bugs.launchpad.net/bugs/1843343) created by **Ivan Kurnosov (zerkms)** ``` AWS S3 SSE-C provides a better protection and I believe it would be convenient to have its support together with SSE that is already available. 
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html Thanks ```",6 118022154,2019-07-31 21:25:05.614,NameError: name 'unicode' is not defined (lp:#1838574),"[Original report](https://bugs.launchpad.net/bugs/1838574) created by **Stephan Müller (smueller18)** ``` Python 3 does not natively support the function unicode(). In onedrivebackend.py this function is used, which results in errors during execution. In Python 3, all occurrences of unicode() must be replaced with str(). In some projects I found the following code snippet, which may also be a solution: PY3 = sys.version_info[0] == 3 if PY3: unicode = str Stacktrace: Attempt 1 failed. NameError: name 'unicode' is not defined Level 8:duplicity:Attempt 1 failed. NameError: name 'unicode' is not defined Traceback (most recent call last): File ""/home/sm/git/lab/smueller18/docker-duplicity/.venv/lib/python3.7/site-packages/duplicity/backend.py"", line 375, in inner_retry return fn(self, *args) File ""/home/sm/git/lab/smueller18/docker-duplicity/.venv/lib/python3.7/site-packages/duplicity/backend.py"", line 535, in put self.__do_put(source_path, remote_filename) File ""/home/sm/git/lab/smueller18/docker-duplicity/.venv/lib/python3.7/site-packages/duplicity/backend.py"", line 521, in __do_put self.backend._put(source_path, remote_filename) File ""/home/sm/git/lab/smueller18/docker-duplicity/.venv/lib/python3.7/site-packages/duplicity/backends/onedrivebackend.py"", line 260, in _put u'Content-Length': unicode(len(chunk)), NameError: name 'unicode' is not defined Environment: duplicity 0.8.02 Python 3.7.3 Ubuntu 19.04 ```",6 118022150,2019-07-25 15:53:58.315,update ppa version (lp:#1837918),"[Original report](https://bugs.launchpad.net/bugs/1837918) created by **Rune Philosof (olberd)** ``` It would be nice to have version 0.8.02 in the duplicity ppa. Even nicer to have it in Debian and Ubuntu. 
```",8 118022143,2019-07-19 09:50:26.176,"Permanent ""Another duplicity instance is already running"" (lp:#1837201)","[Original report](https://bugs.launchpad.net/bugs/1837201) created by **Eoghan Murray (eoghan-n)** ``` For many weeks now my duplicity instance (started via deja-dup) has been failing with ""Another duplicity instance is already running with this archive directory"" My storage location is on a network which is automatically mounted I checked the folder for the presence of a lockfile (after reading the code) and there was none there. It was only after adding extra info as follows: ` log.FatalError( ""Another duplicity instance is already running with this archive directory: %s \n"" % (globals.lockpath), ` That I discovered that globals.lockpath is set to `~/.cache/deja- dup/3ac8a04e2f4c8feb215d2e9f8eb12645/` Should there be something in duplicity to test the age of the lockpath, or actually test it for an actively running process, e.g. store the PID and check (using ps) whether that process is running? Apologies if this is a concern of deja-dup. ```",14 118022141,2019-07-16 17:37:51.953,typeerror: expected string or buffer from mega (lp:#1836785),"[Original report](https://bugs.launchpad.net/bugs/1836785) created by **David Parkin (davidmichaelparkin)** ``` Duplicity 0.4.14 Python 2.7.14 Commandline or args built around line 158 is a list. It appears that megatools expects a string or buffer. I used join to create a string and it appears to work. I notice that the code hasn't changed so I assume no-one uses mega or I've something wrong. ```",6 118022139,2019-07-10 23:05:37.475,--dry-run does not work with restore (lp:#1836118),"[Original report](https://bugs.launchpad.net/bugs/1836118) created by **Jake Herrmann (jtherrmann-deactivatedaccount)** ``` The --dry-run option does not seem to work with the restore command. 
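The stale-lock idea in the lp:#1837201 report above (store the owner's PID in the lockfile and check whether that process is still alive) can be sketched without shelling out to `ps`, using signal 0 as a POSIX existence probe. `pid_is_running` is a hypothetical helper, not duplicity code:

```python
import os

def pid_is_running(pid):
    # Signal 0 delivers nothing; kill() only performs the permission
    # and existence checks, so this probes whether the PID exists (POSIX).
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True

# A lockfile whose recorded PID is no longer running could then be treated
# as stale and removed, instead of aborting with
# 'Another duplicity instance is already running with this archive directory'.
```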
This is the command I'm trying to run: duplicity restore -vi --sign-key --force $HOME --dry-run duplicity prints some info and then prompts for my passphrase, as usual. After I enter my passphrase, the program exits without printing anything else. I would expect it to instead print all of the files that would be restored (because I've enabled info-level verbosity). I'm running duplicity 0.7.11 on Debian GNU/Linux 9 (stretch). ```",6 118022134,2019-06-24 09:21:12.426,0.8.00 testsuite fails with python 3.5.3 and gnupg 2.1.18 (lp:#1833998),"[Original report](https://bugs.launchpad.net/bugs/1833998) created by **az (az-debian)** ``` with python 3.5.3 and gnupg 2.1.18 the testsuite repeatably fails unit/test_collections.py and almost all of unit/test_gpg.py and unit/test_gpginterface.py. the error messages all point to the same bit of code at duplicity/gpginterface.py:447, so i'm just including the first one: =================================== FAILURES =================================== _____________________ CollectionTest.test_sigchain_fileobj _____________________ self = @ pytest.mark.usefixtures(u""redirect_stdin"") def test_sigchain_fileobj(self): u""""""Test getting signature chain fileobjs from archive_dir_path"""""" > self.set_gpg_profile() unit/test_collections.py:189: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ unit/test_collections.py:97: in set_gpg_profile self.set_global(u'gpg_profile', gpg.GPGProfile(passphrase=u""foobar"")) ../duplicity/gpg.py:95: in __init__ self.gpg_version = self.get_gpg_version(globals.gpg_binary) ../duplicity/gpg.py:111: in get_gpg_version res = gnupg.run([u""--version""], create_fhs=[u""stdout""]) ../duplicity/gpginterface.py:375: in run create_fhs, attach_fhs) ../duplicity/gpginterface.py:427: in _attach_fork_exec self._as_child(process, gnupg_commands, args) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = process = gnupg_commands = ['--version'], args = [] 
def _as_child(self, process, gnupg_commands, args): u""""""Stuff run after forking in child"""""" # child for std in _stds: p = process._pipes[std] > os.dup2(p.child, getattr(sys, u""__%s__"" % std).fileno()) E ValueError: underlying buffer has been detached ../duplicity/gpginterface.py:447: ValueError with python 3.7.3 and gnupg 2.2.12 or 2.2.13 those same tests survive just fine. ```",6 118022132,2019-06-20 13:20:57.794,[0.8] ParseArgsTest.test_unicode_paths_non_globbing is failing (lp:#1833562),"[Original report](https://bugs.launchpad.net/bugs/1833562) created by **Sebastien Bacher (seb128)** ``` Using 0.8 r1377 Trying on Ubuntu Disco to see how it behave the tests fail with a $ cd /tmp $ bzr branch lp:duplicity $ cd duplicity $ python2.7 setup.py build --force $ PYTHONPATH=/tmp/duplicity/build/lib.linux-i686-2.7/ python2.7 ./setup.py test (that's on an i386 chroot, path to adapt for amd64) The tests have some errors, including that one self = def test_unicode_paths_non_globbing(self): u""""""Test functional test test_unicode_paths_non_globbing as a unittest"""""" self.root = Path(u""testfiles/select-unicode"") self.ParseTest([(u""--exclude"", u""testfiles/select- unicode/прыклад/пример/例/Παράδειγμα/उदाहरण.txt""), (u""--exclude"", u""testfiles/select- unicode/прыклад/пример/例/Παράδειγμα/דוגמא.txt""), (u""--exclude"", u""testfiles/select- unicode/прыклад/пример/例/მაგალითი/""), (u""--include"", u""testfiles/select- unicode/прыклад/пример/例/""), (u""--exclude"", u""testfiles/select- unicode/прыклад/пример/""), (u""--include"", u""testfiles/select- unicode/прыклад/""), (u""--include"", u""testfiles/select- unicode/օրինակ.txt""), (u""--exclude"", u""testfiles/select-unicode/**"")], [(), (u""прыклад"",), (u""прыклад"", u""пример""), (u""прыклад"", u""пример"", u""例""), (u""прыклад"", u""пример"", u""例"", u""Παράδειγμα""), (u""прыклад"", u""пример"", u""例"", u""Παράδειγμα"", u""ઉદાહરણ.log""), > (u""прыклад"", u""উদাহরণ""), (u""օրինակ.txt"",)]) 
unit/test_selection.py:887: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ unit/test_selection.py:234: in ParseTest results_as_list = list(Iter.map(self.uc_index_from_path, self.Select)) ../duplicity/lazy.py:49: in map yield function(i) _ _ _ _ _ _ _ _ _ _ _ _ ```",6 118022130,2019-06-20 13:19:04.941,TestSimpleUnicode.test_simple_unicode (lp:#1833561),"[Original report](https://bugs.launchpad.net/bugs/1833561) created by **Sebastien Bacher (seb128)** ``` Using 0.8 r1377 Trying on Ubuntu Disco to see how it behave the tests fail with a $ cd /tmp $ bzr branch lp:duplicity $ cd duplicity $ python2.7 setup.py build --force $ PYTHONPATH=/tmp/duplicity/build/lib.linux-i686-2.7/ python2.7 ./setup.py test (that's on an i386 chroot, path to adapt for amd64) The tests have some errors, including that one ____________________ TestSimpleUnicode.test_simple_unicode _____________________ self = def test_simple_unicode(self): u""""""Test simple unicode comparison"""""" self.assertEqual(inc_sel_file(u""прыклад/пример/例/Παράδειγμα/उदाहरण.txt"", > u""прыклад/пример/例/Παράδειγμα/उदाहरण.txt""), 1) E AssertionError: None != 1 unit/test_globmatch.py:249: AssertionError (TestSquareBrackets.test_square_bracket_options_unicode hits a similar error) ```",6 118022129,2019-06-19 16:39:06.442,AttributeError during backup (lp:#1833447),"[Original report](https://bugs.launchpad.net/bugs/1833447) created by **Jacob Mansfield (cyberjacob)** ``` Duplicity Version: 0.8 Python Version: 3.5.3 OS Distro: Debian OS Version: Stable (Stretch, 9.9) Source: Linux filesystem (ext4) Destination: Backblaze B2 Command line: /usr/local/bin/duplicity -v9 --full-if-older-than 14D --exclude /**.DS_Store --exclude /**Icon? 
--exclude /**.AppleDouble --include=/mnt/Pictures/ --include=/home/ --include=/mnt/Documents/ --include=/mnt/Usershare/ --include=/root/ --exclude=** / b2://xxxx:xxxx@xxxx/ Logs: [Removed lots of files being checked] Releasing lockfile b'/root/.cache/duplicity/09cf580279539188df4bbf911b749576/lockfile' Removing still remembered temporary file /tmp/duplicity-4u664jq5-tempdir/mkstemp-c0yaehv4-1 Removing still remembered temporary file /tmp/duplicity-4u664jq5-tempdir/mktemp-w_t_j867-4 Releasing lockfile b'/root/.cache/duplicity/09cf580279539188df4bbf911b749576/lockfile' Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1706, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1692, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1538, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1662, in do_backup full_backup(col_stats) File ""/usr/local/bin/duplicity"", line 568, in full_backup globals.backend) File ""/usr/local/bin/duplicity"", line 425, in write_multivol globals.volsize) File ""/usr/local/lib/python3.5/dist-packages/duplicity/gpg.py"", line 393, in GPGWriteFile data = block_iter.__next__().data File ""/usr/local/lib/python3.5/dist-packages/duplicity/diffdir.py"", line 543, in __next__ result = self.process(next(self.input_iter)) # pylint: disable=assignment-from-no-return File ""/usr/local/lib/python3.5/dist-packages/duplicity/diffdir.py"", line 681, in process data, last_block = self.get_data_block(fp) File ""/usr/local/lib/python3.5/dist-packages/duplicity/diffdir.py"", line 710, in get_data_block if fp.close(): File ""/usr/local/lib/python3.5/dist-packages/duplicity/diffdir.py"", line 460, in close self.callback(self.sig_gen.getsig(), *self.extra_args) File ""/usr/local/lib/python3.5/dist-packages/duplicity/diffdir.py"", line 141, in callback sigTarFile.addfile(ti, io.BytesIO(sig_string)) File ""/usr/lib/python3.5/tarfile.py"", line 1962, in addfile buf = tarinfo.tobuf(self.format, self.encoding, 
self.errors) File ""/usr/lib/python3.5/tarfile.py"", line 804, in tobuf return self.create_gnu_header(info, encoding, errors) File ""/usr/lib/python3.5/tarfile.py"", line 835, in create_gnu_header return buf + self._create_header(info, GNU_FORMAT, encoding, errors) File ""/usr/lib/python3.5/tarfile.py"", line 925, in _create_header stn(info.get(""gname"", """"), 32, encoding, errors), File ""/usr/lib/python3.5/tarfile.py"", line 156, in stn s = s.encode(encoding, errors) AttributeError: 'bytes' object has no attribute 'encode' Releasing lockfile b'/root/.cache/duplicity/09cf580279539188df4bbf911b749576/lockfile' Exception ignored in: .remove at 0x7efed9f0d840> Traceback (most recent call last): File ""/usr/lib/python3.5/weakref.py"", line 117, in remove TypeError: 'NoneType' object is not callable ```",6 118022127,2019-05-20 22:41:23.754,No helpful error message when `par2+` is forgotten (lp:#1829798),"[Original report](https://bugs.launchpad.net/bugs/1829798) created by **Marian Sigler (maix42)** ``` (I think this might be related to 1406173 but it seems different as it is old and marked as fixed) When backing up with par2 turned on (eg a par2+ssh://... url), *.par2 files get created. `I just wanted to restore from this and I forgot to add the `par2+` part (I used `file://...` instead of `par2+file://` and I got an AssertionError in `add_filename` (see log below) I know this is my fault and everything, but if it's not much of a hassle it would be cool if duplicity could recognize this situation and display some meaningful warning (something like ""Directory contains .par2 files. 
Did you mean to say par2+file:///...?"") /tmp/0N2AeR# duplicity restore file:///backups/xxx/duplicity/ restore/etc --file-to-restore etc -v9 Using archive dir: /root/.cache/duplicity/28xxxxef Using backup name: 28xxxxef GPG binary is gpg, version 2.2.15 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: restore Acquiring lockfile /root/.cache/duplicity/28xxxxef/lockfile ================================================================================ duplicity 0.7.18.2 (October 17, 2018) Args: /usr/bin/duplicity restore file:///backups/xxx/duplicity/ restore/etc --file-to-restore etc -v9 Linux dirk 
5.0.13-arch1-1-ARCH #1 SMP PREEMPT Sun May 5 18:05:41 UTC 2019 x86_64 /usr/bin/python2 2.7.16 (default, Mar 11 2019, 18:59:25) [GCC 8.2.1 20181127] ================================================================================ Using temporary directory /tmp/duplicity-W5VLIR-tempdir Registering (mkstemp) temporary file /tmp/duplicity-W5VLIR-tempdir/mkstemp- xkcP1G-1 Temp has 5367005184 available, backup will use approx 272629760. Local and Remote metadata are synchronized, no sync needed. 658 files exist on backend 31 files exist in cache Extracting backup chains from list of files: [... # >200 archives, for each one there's three files: .gpg, .gpg.par2, .gpg.vol000+100.par2 ... Traceback (innermost last): File ""/usr/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1424, in do_backup action).set_values() File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 836, in get_backup_chains add_to_sets(f) File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 824, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({..., 42: 'duplicity- inc.20190325T105242Z.to.20190502T213407Z.vol42.difftar.gpg.par2', ...}, 'duplicity- inc.20190325T105242Z.to.20190502T213407Z.vol42.difftar.gpg.vol000+100.par2') ```",6 118022120,2019-05-13 04:06:30.004,Incremental backups do not validate passphrase (lp:#1828761),"[Original report](https://bugs.launchpad.net/bugs/1828761) created by **Tom Boshoven (tomboshoven)** ``` This was reported in a comment on another bug, but I think this is serious enough to warrant its 
own one. It broke my incremental backups and I just happened to notice. The report with repro steps is https://bugs.launchpad.net/duplicity/+bug/918489/comments/22 The expected behavior would be to fail on invalid passphrases on incremental backups (or otherwise make it very clear that this is the user's responsibility). I seem to remember testing this in the past and verifying that an error is raised. Right now, it leads to corruption and a beautiful stack trace followed by a hanging application when trying to verify or restore: GPG error detail: Traceback (innermost last): File ""/bin/duplicity"", line 1560, in with_tempdir(main) File ""/bin/duplicity"", line 1546, in with_tempdir fn() File ""/bin/duplicity"", line 1398, in main do_backup(action) File ""/bin/duplicity"", line 1479, in do_backup verify(col_stats) File ""/bin/duplicity"", line 875, in verify for backup_ropath, current_path in collated: File ""/usr/lib/python2.7/site-packages/duplicity/diffdir.py"", line 276, in collate2iters relem1 = riter1.next() File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/site-packages/duplicity/diffdir.py"", line 286, in collate2iters relem2 = riter2.next() File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 344, in next self.set_tarfile() File ""/usr/lib/python2.7/site-packages/duplicity/patchdir.py"", line 332, in set_tarfile assert not self.current_fp.close() File ""/usr/lib/python2.7/site-packages/duplicity/dup_temp.py"", line 227, in close assert not self.fileobj.close() File ""/usr/lib/python2.7/site-packages/duplicity/gpg.py"", line 305, in close self.gpg_failed() File ""/usr/lib/python2.7/site-packages/duplicity/gpg.py"", line 272, in gpg_failed raise GPGError(msg) GPGError: GPG Failed, see log below: ===== Begin GnuPG 
log ===== gpg: AES encrypted data gpg: encrypted with 1 passphrase gpg: decryption failed: Bad session key ===== End GnuPG log ===== GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: AES encrypted data gpg: encrypted with 1 passphrase gpg: decryption failed: Bad session key ===== End GnuPG log ===== If left unnoticed, the only way to go about it is manual recovery. I'm updating my backup scripts to do some basic validation of the passphrase to make sure that a simple typo will not break my backups in the future. duplicity 0.7.18.2 on Arch Linux, Python 3.7.3. ```",14 118022114,2019-04-07 19:49:39.831,failure to remove backups and missing error info (remove-all-but-n-full) (lp:#1823560),"[Original report](https://bugs.launchpad.net/bugs/1823560) created by **alexander (bcclsn)** ``` duplicity remove-all-but-n-full 2 --force fails with an error that is not specified... Traceback (innermost last): File ""/usr/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1424, in do_backup action).set_values() File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 836, in get_backup_chains add_to_sets(f) File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 824, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/site-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20190407T165143Z.vol1.difftar.gpg', 2: 'duplicity-full.20190407T165143Z.vol2.difftar.gpg', 10: 'duplicity- full.20190407T165143Z.vol10.difftar.gpg', 11: 'duplicity- full.20190407T165143Z.vol11.difftar.gpg', 12: 'duplicity- 
full.20190407T165143Z.vol12.difftar.gpg', 13: 'duplicity- full.20190407T165143Z.vol13.difftar.gpg', 14: 'duplicity- full.20190407T165143Z.vol14.difftar.gpg', 15: 'duplicity- full.20190407T165143Z.vol15.difftar.gpg', 16: 'duplicity- full.20190407T165143Z.vol16.difftar.gpg', 17: 'duplicity- full.20190407T165143Z.vol17.difftar.gpg', 18: 'duplicity- full.20190407T165143Z.vol18.difftar.gpg', 19: 'duplicity- full.20190407T165143Z.vol19.difftar.gpg', 20: 'duplicity- full.20190407T165143Z.vol20.difftar.gpg'}, 'duplicity- full.20190407T165143Z.vol20.difftar.gpg') duplicity version: 0.7.18.2 python version: 3.7.3 os distro: archlinux x64 ```",6 118022113,2019-04-04 16:11:36.066,backup and also restore failed (lp:#1823201),"[Original report](https://bugs.launchpad.net/bugs/1823201) created by **Erik Uitterdijk (heidehipper)** ``` Backup fails lately on OpenSuse Leap laptop, openSUSE Leap 15.0 with duplicity-0.7.17-lp150.1.1.x86_64 Hi, lately I have an issue with the application. I get the following error message and the backup fails: Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1393, in main do_backup(action) File ""/usr/bin/duplicity"", line 1472, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 728, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib64/python2.7/site- packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/lib64/python2.7/site- packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib64/python2.7/site- packages/duplicity/patchdir.py"", line 107, in 
filter_path_iter for path in path_iter: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib64/python2.7/site- packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 764, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 1 Also the restore of a file failed with the following error message, ""Failed with an unknown error"": Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1393, in main do_backup(action) File ""/usr/bin/duplicity"", line 1472, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 728, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 107, in filter_path_iter for path in path_iter: File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File 
""/usr/bin/duplicity"", line 764, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 5 Any help / suggestions appreciated. Regards, Erik the Netherlands ```",8 118022109,2019-03-28 11:08:37.414,Failed with an unknown error. Traceback (innermost last): (lp:#1822077),"[Original report](https://bugs.launchpad.net/bugs/1822077) created by **Morgan Read (mstuff)** ``` Backup failed after running for over 12 hours... So, it's going to start all over again - again (Please provide some way to manage auto initiated 'full backups' triggered to guard against corruption - they're killing my data usage and storage. See Bug #1821628) Traceback (innermost last): File ""/usr/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1516, in do_backup full_backup(col_stats) File ""/usr/bin/duplicity"", line 577, in full_backup globals.backend) File ""/usr/bin/duplicity"", line 459, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/lib64/python2.7/site-packages/duplicity/asyncscheduler.py"", line 146, in schedule_task return self.__run_synchronously(fn, params) File ""/usr/lib64/python2.7/site-packages/duplicity/asyncscheduler.py"", line 172, in __run_synchronously ret = fn(*params) File ""/usr/bin/duplicity"", line 458, in vol_num: put(tdp, dest_filename, vol_num), File ""/usr/bin/duplicity"", line 347, in put backend.put(tdp, dest_filename) File ""/usr/lib64/python2.7/site-packages/duplicity/backend.py"", line 395, in inner_retry % (n, e.__class__.__name__, util.uexc(e))) File ""/usr/lib64/python2.7/site-packages/duplicity/util.py"", line 82, in uexc return ufn(m) File ""/usr/lib64/python2.7/site-packages/duplicity/util.py"", line 63, in ufn return filename.decode(globals.fsencoding, 'replace') AttributeError: 'ProtocolError' object has no attribute 'decode' ```",8 118019245,2019-03-11 
01:30:31.578,InvalidBackendURL error for B2 backend if application key contains a '/' character (lp:#1819390),"[Original report](https://bugs.launchpad.net/bugs/1819390) created by **Justin Warren (justin-eigenmagic)** ``` If your application key contains a '/' character, the parsing of the b2 URL fails with this error: InvalidBackendURL: Syntax error (port) in: b2://:@ User error detail: Traceback (innermost last): File ""/usr/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1385, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1091, in ProcessCommandLine args = parse_cmdline_options(cmdline_list) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 720, in parse_cmdline_options lpath, backend_url = args_to_path_backend(args[0], args[1]) #@ UnusedVariable File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 983, in args_to_path_backend arg1_is_backend, arg2_is_backend = backend.is_backend_url(arg1), backend.is_backend_url(arg2) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 170, in is_backend_url pu = ParsedUrl(url_string) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 306, in __init__ re.search('::[^:]+$', self.netloc), self.netloc)) It appears to be due to the regex in backend.py on line 306, but it's actually due to the result returned by urlparse.urlparse(): netloc gets set to the string made up of the key ID, the ':', and the portion of the applicationKey up to (but not including) the '/' character. Essentially, '/' isn't a valid password character as far as urlparse() is concerned. The workaround is to delete the applicationKey in Backblaze and generate a new one until you get an applicationKey that doesn't contain a '/' character.
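The urlparse() truncation described above can be reproduced directly; this is a minimal sketch in which the account ID, application key, and bucket name are made-up placeholders:

```python
# Sketch of the parsing problem described above.
# Account ID, application key, and bucket are made-up placeholders.
from urllib.parse import urlparse  # 'from urlparse import urlparse' on Python 2

# A key containing a '/': urlparse() ends the netloc at the first '/',
# so the rest of the key, the '@', and the bucket all land in .path
bad = urlparse('b2://000acct0001:appkey/withslash@my-bucket/backups')
print(bad.netloc)    # 000acct0001:appkey
print(bad.path)      # /withslash@my-bucket/backups
print(bad.hostname)  # 000acct0001 -- the bucket name cannot be recovered

# The same URL with a '/'-free application key parses as intended:
good = urlparse('b2://000acct0001:appkeynoslash@my-bucket/backups')
print(good.password)  # appkeynoslash
print(good.hostname)  # my-bucket
```

In the bad case the text after the ':' in netloc is treated as a port number, which matches the Syntax error (port) message above.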
Duplicity version: 0.7.18.2, also appears in 0.7.17 from Ubuntu bionic Python version: 2.7.15rc1 OS: Ubuntu 18.04.2 LTS (bionic) Target filesystem: Backblaze ```",16 118022107,2019-03-02 19:53:18.362,"""No backup chains found"" restoring with PCA backend (lp:#1818355)","[Original report](https://bugs.launchpad.net/bugs/1818355) created by **Thierry B. (thierrybo2)** ``` Duplicity 0.8 (rev. 1348) /usr/bin/python2 2.7.13 (default, Sep 26 2018, 18:42:22) System: Host: thierrybo-desk Kernel: 4.19.0-0.bpo.2-amd64 x86_64 (64 bit) Desktop: Openbox 3.6.1 Distro: Devuan GNU/Linux ascii Just a test folder to backup:~/Sys/bzr.repositories/duplicity/bin/ ls ~/Sys/bzr.repositories/duplicity/bin/ duplicity duplicity.1 rdiffdir rdiffdir.1 config.json used: [ { ""description"": ""Cold storage"", ""url"": ""pca://thierrybo-desk_tests"", ""prefixes"": [""cold_""] }, { ""description"": ""Hot storage"", ""url"": ""swift://thierrybo-desk_tests_hot"", ""prefixes"": [""hot_""] } ] First backup : duplicity --verbosity notice --num-retries 3 --asynchronous-upload --volsize 100 --file-prefix-manifest 'hot_' --file-prefix-signature 'hot_' --file-prefix-archive 'cold_' ~/Sys/bzr.repositories/duplicity/bin/ ""multi://$HOME/.config/duplicity/config.json?mode=mirror&onfail=abort"" Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Sun Feb 17 23:07:05 2019 GnuPG passphrase: Retype passphrase to confirm: --------------[ Backup Statistics ]-------------- StartTime 1551552750.93 (Sat Mar 2 19:52:30 2019) EndTime 1551552750.94 (Sat Mar 2 19:52:30 2019) ElapsedTime 0.01 (0.01 seconds) SourceFiles 5 SourceFileSize 156558 (153 KB) NewFiles 0 NewFileSize 0 (0 bytes) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 0 RawDeltaSize 0 (0 bytes) TotalDestinationSizeChange 110 (110 bytes) Errors 0 ------------------------------------------------- Then I try to restore rdiffdir in a new directory : duplicity --verbosity debug --file-to-restore rdiffdir ""multi://$HOME/.config/duplicity/config.json?mode=mirror&onfail=abort"" ""$HOME/Documents/restore_duplicity"" Using archive dir: /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6 Using backup name: 3236aff760bf135cad84bed69e2b2ff6 GPG binary is gpg, version (2, 1, 18) Import of duplicity.backends.adbackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.jottacloudbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of 
duplicity.backends.par2backend Succeeded Import of duplicity.backends.pcabackend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded MultiBackend: use store pca://thierrybo-desk_tests Multibackend: register affinity for prefix cold_ MultiBackend: use store swift://thierrybo-desk_tests_hot Multibackend: register affinity for prefix hot_ Main action: restore Acquiring lockfile /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6/lockfile ================================================================================ duplicity $version ($reldate) Args: /usr/local/bin/duplicity --verbosity debug --file-to-restore rdiffdir multi:///home/thierrybo/.config/duplicity/config.json?mode=mirror&onfail=abort /home/thierrybo/Documents/restore_duplicity Linux thierrybo-desk 4.19.0-0.bpo.2-amd64 #1 SMP Debian 4.19.16-1~bpo9+1 (2019-02-07) x86_64 /usr/bin/python2 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516] ================================================================================ Using temporary directory /tmp/duplicity-oVwRTf-tempdir Registering (mkstemp) temporary file /tmp/duplicity-oVwRTf-tempdir/mkstemp- Zbuw0h-1 Temp has 847536128 available, backup will use approx 272629760. 
MultiBackend: list from pca://thierrybo-desk_tests: ['cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg', 'cold_duplicity- inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg'] MultiBackend: list from swift://thierrybo-desk_tests_hot: ['hot_duplicity- full-signatures.20190217T220705Z.sigtar.gpg', 'hot_duplicity- full.20190217T220705Z.manifest.gpg', 'hot_duplicity- inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg', 'hot_duplicity- inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg', 'hot_duplicity- inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg', 'hot_duplicity- inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg', 'hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg'] MultiBackend: combined list: ['hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg', 'hot_duplicity-full-signatures.20190217T220705Z.sigtar.gpg', 'cold_duplicity-inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg', 'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg', 'hot_duplicity-inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg', 'hot_duplicity-inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg', 'cold_duplicity-inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg', 'hot_duplicity-inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg', 'cold_duplicity-inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg', 'hot_duplicity-inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg', 
'hot_duplicity-full.20190217T220705Z.manifest.gpg', 'cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg', 'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg', 'cold_duplicity-inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg'] Local and Remote metadata are synchronized, no sync needed. MultiBackend: list from pca://thierrybo-desk_tests: ['cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg', 'cold_duplicity- inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg', 'cold_duplicity-inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg'] MultiBackend: list from swift://thierrybo-desk_tests_hot: ['hot_duplicity- full-signatures.20190217T220705Z.sigtar.gpg', 'hot_duplicity- full.20190217T220705Z.manifest.gpg', 'hot_duplicity- inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg', 'hot_duplicity- inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg', 'hot_duplicity- inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg', 'hot_duplicity- inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg', 'hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg'] MultiBackend: combined list: ['hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg', 'hot_duplicity-full-signatures.20190217T220705Z.sigtar.gpg', 'cold_duplicity-inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg', 'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg', 
'hot_duplicity-inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg', 'hot_duplicity-inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg', 'cold_duplicity-inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg', 'hot_duplicity-inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg', 'cold_duplicity-inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg', 'hot_duplicity-inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg', 'hot_duplicity-full.20190217T220705Z.manifest.gpg', 'cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg', 'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg', 'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg', 'cold_duplicity-inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg'] 15 files exist on backend 11 files exist in cache Extracting backup chains from list of files: [u'hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg', u'hot_duplicity-full-signatures.20190217T220705Z.sigtar.gpg', u'cold_duplicity- inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg', u'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg', u'hot_duplicity-inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg', u'hot_duplicity-inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg', u'cold_duplicity- inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg', u'hot_duplicity-inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg', u'cold_duplicity- inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg', u'hot_duplicity-inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg', u'hot_duplicity-full.20190217T220705Z.manifest.gpg', u'cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg', u'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg', u'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg', u'cold_duplicity- inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg'] File 
hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity-new- signatures.20190217T220705Z.to.20190217T221019Z.sigtar.gpg' File hot_duplicity-full-signatures.20190217T220705Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity-full- signatures.20190217T220705Z.sigtar.gpg' File cold_duplicity- inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'cold_duplicity- inc.20190217T231512Z.to.20190302T185220Z.vol1.difftar.gpg' File hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity-new- signatures.20190217T222947Z.to.20190217T231512Z.sigtar.gpg' File hot_duplicity-inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity- inc.20190217T220705Z.to.20190217T221019Z.manifest.gpg' File hot_duplicity-inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity- inc.20190217T222947Z.to.20190217T231512Z.manifest.gpg' File cold_duplicity- inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'cold_duplicity- inc.20190217T222947Z.to.20190217T231512Z.vol1.difftar.gpg' File hot_duplicity-inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity- inc.20190217T221019Z.to.20190217T222947Z.manifest.gpg' File cold_duplicity- inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg is not part of a known set; creating new set Ignoring file 
(rejected by backup set) 'cold_duplicity- inc.20190217T220705Z.to.20190217T221019Z.vol1.difftar.gpg' File hot_duplicity-inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity- inc.20190217T231512Z.to.20190302T185220Z.manifest.gpg' File hot_duplicity-full.20190217T220705Z.manifest.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity- full.20190217T220705Z.manifest.gpg' File cold_duplicity-full.20190217T220705Z.vol1.difftar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'cold_duplicity- full.20190217T220705Z.vol1.difftar.gpg' File hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity-new- signatures.20190217T231512Z.to.20190302T185220Z.sigtar.gpg' File hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'hot_duplicity-new- signatures.20190217T221019Z.to.20190217T222947Z.sigtar.gpg' File cold_duplicity- inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg is not part of a known set; creating new set Ignoring file (rejected by backup set) 'cold_duplicity- inc.20190217T221019Z.to.20190217T222947Z.vol1.difftar.gpg' Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6 Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. PASSPHRASE variable not set, asking user. 
GnuPG passphrase: Releasing lockfile /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6/lockfile Removing still remembered temporary file /tmp/duplicity-oVwRTf- tempdir/mkstemp-Zbuw0h-1 Releasing lockfile /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6/lockfile Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1678, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1664, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1510, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1590, in do_backup restore(col_stats) File ""/usr/local/bin/duplicity"", line 724, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/bin/duplicity"", line 746, in restore_get_patched_rop_iter backup_chain = col_stats.get_backup_chain_at_time(time) File ""/usr/local/lib/python2.7/dist-packages/duplicity/collections.py"", line 1002, in get_backup_chain_at_time raise CollectionsError(u""No backup chains found"") CollectionsError: No backup chains found Releasing lockfile /home/thierrybo/.cache/duplicity/3236aff760bf135cad84bed69e2b2ff6/lockfile ```",6 118023011,2019-02-16 12:50:38.760,duplicity does not handle accentuated character for folder path to save (lp:#1816232),"[Original report](https://bugs.launchpad.net/bugs/1816232) created by **Thierry B. (thierrybo2)** ``` Duplicity version 0.8-series. 
(rev 1348) Python 2.7 Distro: Devuan GNU/Linux ascii duplicity --verbosity debug --num-retries 3 --asynchronous-upload --cf-backend pca --volsize 100 ~/Téléchargements/ swift://thierrybo-desk_tests /usr/local/bin/duplicity:69: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal if u'--pydevd' in sys.argv or os.getenv(u'PYDEVD', None): Using temporary directory /tmp/duplicity-_0Apzy-tempdir Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1678, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1664, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1497, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1146, in ProcessCommandLine args = parse_cmdline_options(cmdline_list) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 698, in parse_cmdline_options possible = [c for c in commands if c.startswith(cmd)] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128) If I replace ""~/Téléchargements/"" with a path without accented characters it works ```",18
#!/bin/bash rm -rf /etc/duply/test mkdir /etc/duply/test cat >/etc/duply/test/conf < duply_test/source/counter duply test backup counter=$((counter + 1)) done duply test restore restored This will give you File ""/usr/local/bin/duplicity"", line 733, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/local/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/local/bin/duplicity"", line 770, in get_fileobj_iter manifest.volume_info_dict[vol_num]) File ""/usr/local/bin/duplicity"", line 813, in restore_get_enc_fileobj fileobj = tdp.filtered_open_with_delete(""rb"") File ""/usr/local/lib/python2.7/dist-packages/duplicity/dup_temp.py"", line 120, in filtered_open_with_delete fh = FileobjHooked(path.DupPath.filtered_open(self, mode)) File ""/usr/local/lib/python2.7/dist-packages/duplicity/path.py"", line 779, in filtered_open return gpg.GPGFile(False, self, gpg_profile) File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 225, in __init__ 'logger': self.logger_fp}) File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpginterface.py"", line 374, in run create_fhs, attach_fhs) File 
""/usr/local/lib/python2.7/dist-packages/duplicity/gpginterface.py"", line 402, in _attach_fork_exec pipe = os.pipe() OSError: [Errno 24] Too many open files ```",6 118022099,2019-02-07 20:31:54.836,Duplicity segfault during full backup over FTP (lp:#1815130),"[Original report](https://bugs.launchpad.net/bugs/1815130) created by **Johanan Idicula (jidicula)** ``` Hi everyone, I'm getting a segfault in the middle of an initial full backup. I've included the output with the -v9 option, including only the first and last 200 lines of the output. I've changed usernames and passwords for privacy. This is the first bug I've reported on Launchpad, so please let me know if there is any other information you would like me to provide. Thanks! ----- duplicity 0.7.11 Python 2.7.13 Linux 4.9.0-8-amd64 #1 Debian 4.9.130-2 target filesystem: Seagate NAS OS (device is Seagate 6-bay NAS Pro, Linux- based) root@SERVER_HOSTNAME:/home/USERNAME# duplicity --no-encryption -v9 full / ftp://USERNAME@192.168.1.64/ADMIN/SERVER_HOSTNAME/ Using archive dir: /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a Using backup name: 426601fb2366abecaf071bf54573b38a Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded 
Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded LFTP version is 4.7.4 Password for 'USERNAME@192.168.1.64': Using temporary directory /tmp/duplicity-ZvvXnx-tempdir Registering (mkstemp) temporary file /tmp/duplicity-ZvvXnx-tempdir/mkstemp- IGvjCS-1 SETTINGS: set ssl:verify-certificate true set ftp:ssl-allow false set http:use-propfind true set net:timeout 30 set net:max-retries 5 set ftp:passive-mode on debug open -u 'USERNAME,PASSWORD' ftp://192.168.1.64 Main action: full ================================================================================ duplicity 0.7.11 (December 31, 2016) Args: /usr/bin/duplicity --no-encryption -v9 full / ftp://USERNAME@192.168.1.64/ADMIN/SERVER_HOSTNAME/ Linux SERVER_HOSTNAME 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 /usr/bin/python 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516] ================================================================================ Registering (mkstemp) temporary file /tmp/duplicity-ZvvXnx-tempdir/mkstemp- _bVkiE-2 Temp has 18869755904 available, backup will use approx 272629760. 
CMD: lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp-IGvjCS-1; ( cd ADMIN/SERVER_HOSTNAME/ && ls ) || ( mkdir -p ADMIN/SERVER_HOSTNAME/ && cd ADMIN/SERVER_HOSTNAME/ && ls )"" Reading results of 'lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp- IGvjCS-1; ( cd ADMIN/SERVER_HOSTNAME/ && ls ) || ( mkdir -p ADMIN/SERVER_HOSTNAME/ && cd ADMIN/SERVER_HOSTNAME/ && ls )""' STDERR: ---- Resolving host address... ---- 1 address found: 192.168.1.64 ---- Connecting to 192.168.1.64 (192.168.1.64) port 21 <--- 220 ProFTPD 1.3.5 Server (ADMIN1) [::ffff:192.168.1.64] ---> FEAT <--- 211-Features: <--- MFF modify;UNIX.group;UNIX.mode; <--- REST STREAM <--- MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; <--- UTF8 <--- LANG en-US* <--- EPRT <--- EPSV <--- MDTM <--- TVFS <--- MFMT <--- SIZE <--- 211 End ---> LANG <--- 500 Unable to handle command ---> OPTS UTF8 ON <--- 200 UTF8 set to on initialized translation from ANSI_X3.4-1968 to UTF-8 initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---> OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner <--- 200 OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode; ---> USER USERNAME <--- 331 Password required for USERNAME ---> PASS PASSWORD <--- 230 User USERNAME logged in ---> PWD <--- 257 ""/"" is the current directory ---- CWD path to be sent is `/ADMIN/SERVER_HOSTNAME' ---> CWD /ADMIN/SERVER_HOSTNAME <--- 250 CWD command successful ---> EPSV <--- 229 Entering Extended Passive Mode (|||38427|) ---- Connecting data socket to (192.168.1.64) port 38427 ---- Data connection established ---> LIST <--- 150 Opening ASCII mode data connection for file list initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---- Got EOF on data connection ---- Closing data socket <--- 226 Transfer complete ---> QUIT <--- 221 Goodbye. ---- Closing control socket STDOUT: Local and Remote metadata are synchronized, no sync needed. 
CMD: lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp-IGvjCS-1; ( cd ADMIN/SERVER_HOSTNAME/ && ls ) || ( mkdir -p ADMIN/SERVER_HOSTNAME/ && cd ADMIN/SERVER_HOSTNAME/ && ls )"" Reading results of 'lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp- IGvjCS-1; ( cd ADMIN/SERVER_HOSTNAME/ && ls ) || ( mkdir -p ADMIN/SERVER_HOSTNAME/ && cd ADMIN/SERVER_HOSTNAME/ && ls )""' STDERR: ---- Resolving host address... ---- 1 address found: 192.168.1.64 ---- Connecting to 192.168.1.64 (192.168.1.64) port 21 <--- 220 ProFTPD 1.3.5 Server (ADMIN1) [::ffff:192.168.1.64] ---> FEAT <--- 211-Features: <--- MFF modify;UNIX.group;UNIX.mode; <--- REST STREAM <--- MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; <--- UTF8 <--- LANG en-US* <--- EPRT <--- EPSV <--- MDTM <--- TVFS <--- MFMT <--- SIZE <--- 211 End ---> LANG <--- 500 Unable to handle command ---> OPTS UTF8 ON <--- 200 UTF8 set to on initialized translation from ANSI_X3.4-1968 to UTF-8 initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---> OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner <--- 200 OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode; ---> USER USERNAME <--- 331 Password required for USERNAME ---> PASS PASSWORD <--- 230 User USERNAME logged in ---> PWD <--- 257 ""/"" is the current directory ---- CWD path to be sent is `/ADMIN/SERVER_HOSTNAME' ---> CWD /ADMIN/SERVER_HOSTNAME <--- 250 CWD command successful ---> EPSV <--- 229 Entering Extended Passive Mode (|||15614|) ---- Connecting data socket to (192.168.1.64) port 15614 ---- Data connection established ---> LIST <--- 150 Opening ASCII mode data connection for file list initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---- Got EOF on data connection ---- Closing data socket <--- 226 Transfer complete ---> QUIT <--- 221 Goodbye. 
---- Closing control socket STDOUT: 0 files exist on backend 2 files exist in cache Extracting backup chains from list of files: [] Last full backup date: none Collection Status ----------------- Connecting with backend: BackendWrapper Archive dir: /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a Found 0 secondary backup chains. No backup chains with active signatures found No orphaned or incomplete backup sets found. Using temporary directory /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a/duplicity-ue3jAD- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a/duplicity-ue3jAD- tempdir/mktemp-siv12H-1 Using temporary directory /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a/duplicity-jLIvWS- tempdir Registering (mktemp) temporary file /root/.cache/duplicity/426601fb2366abecaf071bf54573b38a/duplicity-jLIvWS- tempdir/mktemp-E9VXwK-1 AsyncScheduler: instantiating at concurrency 0 Registering (mktemp) temporary file /tmp/duplicity-ZvvXnx- tempdir/mktemp-K4698Y-3 Selecting / Comparing . and None Getting delta of (. dir) and None A . Selection: examining path /.cache Selection: + no selection functions found. Including Selecting /.cache Comparing .cache and None Getting delta of (.cache dir) and None A .cache Selection: examining path /bin Selection: + no selection functions found. Including Selecting /bin Comparing bin and None Getting delta of (bin dir) and None A bin Selection: examining path /bin/bash Selection: + no selection functions found. Including [skipping to last 200 lines] Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181010 cell culture assay/NMuMg cells no microstructures_4_10x.tif Selection: + no selection functions found. 
Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181010 cell culture assay/NMuMg cells no microstructures_4_10x.tif Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181010 cell culture assay/NMuMg cells no microstructures_4_10x.tif and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181010 cell culture assay/NMuMg cells no microstructures_4_10x.tif reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181010 cell culture assay/NMuMg cells no microstructures_4_10x.tif Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol dir) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle Selection: + no selection functions found. 
Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle dir) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_1 edges upright.tif Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_1 edges upright.tif Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_1 edges upright.tif and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_1 edges upright.tif reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_1 edges upright.tif Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_2 center.tif Selection: + no selection functions found. 
Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_2 center.tif Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_2 center.tif and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_2 center.tif reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_2 center.tif Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_3 center thick HG not all way to edges.tif Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_3 center thick HG not all way to edges.tif Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_3 center thick HG not all way to edges.tif and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_3 center thick HG not all way to edges.tif reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/BF_3 center thick HG not all way to edges.tif Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later Selection: + no selection functions found. 
Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later dir) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 160mA close to min response power.stk Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 160mA close to min response power.stk Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 160mA close to min response power.stk and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 160mA close to min response power.stk reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 160mA close to min response power.stk AsyncScheduler: running task synchronously (asynchronicity disabled) Writing duplicity-full.20190207T200614Z.vol13.difftar.gz CMD: lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp-IGvjCS-1; 
mkdir -p ADMIN/SERVER_HOSTNAME/; put /tmp/duplicity-ZvvXnx-tempdir/mktemp- CpTjl1-15 -o ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol13.difftar.gz"" Reading results of 'lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp- IGvjCS-1; mkdir -p ADMIN/SERVER_HOSTNAME/; put /tmp/duplicity-ZvvXnx- tempdir/mktemp-CpTjl1-15 -o ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol13.difftar.gz""' STATUS: 0 STDERR: ---- Resolving host address... ---- 1 address found: 192.168.1.64 ---- Connecting to 192.168.1.64 (192.168.1.64) port 21 <--- 220 ProFTPD 1.3.5 Server (ADMIN1) [::ffff:192.168.1.64] ---> FEAT <--- 211-Features: <--- MFF modify;UNIX.group;UNIX.mode; <--- REST STREAM <--- MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; <--- UTF8 <--- LANG en-US* <--- EPRT <--- EPSV <--- MDTM <--- TVFS <--- MFMT <--- SIZE <--- 211 End ---> LANG <--- 500 Unable to handle command ---> OPTS UTF8 ON <--- 200 UTF8 set to on initialized translation from ANSI_X3.4-1968 to UTF-8 initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---> OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner <--- 200 OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode; ---> USER USERNAME <--- 331 Password required for USERNAME ---> PASS PASSWORD <--- 230 User USERNAME logged in ---> PWD <--- 257 ""/"" is the current directory ---> MKD ADMIN <--- 550 ADMIN: Permission denied ---> MKD ADMIN/SERVER_HOSTNAME <--- 550 ADMIN/SERVER_HOSTNAME: File exists ---> MKD ADMIN/SERVER_HOSTNAME/ <--- 550 ADMIN/SERVER_HOSTNAME/: File exists mkdir: Access failed: 550 ADMIN/SERVER_HOSTNAME/: File exists ---> TYPE I <--- 200 Type set to I ---> EPSV <--- 229 Entering Extended Passive Mode (|||32061|) ---- Connecting data socket to (192.168.1.64) port 32061 ---- Data connection established ---> ALLO 209705982 <--- 202 No storage allocation necessary ---> STOR ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol13.difftar.gz <--- 150 Opening BINARY mode data 
connection for ADMIN/SERVER_HOSTNAME/duplicity-full.20190207T200614Z.vol13.difftar.gz ---- Closing data socket <--- 226 Transfer complete ---> MFMT 20190207201021 ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol13.difftar.gz <--- 213 Modify=20190207201021; ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol13.difftar.gz ---> QUIT <--- 221 Goodbye. ---- Closing control socket STDOUT: Deleting /tmp/duplicity-ZvvXnx-tempdir/mktemp-CpTjl1-15 Forgetting temporary file /tmp/duplicity-ZvvXnx-tempdir/mktemp-CpTjl1-15 AsyncScheduler: task completed successfully Processed volume 13 Registering (mktemp) temporary file /tmp/duplicity-ZvvXnx- tempdir/mktemp-9sLuuf-16 Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA.stk Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA.stk Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA.stk and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA.stk reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA.stk AsyncScheduler: running task synchronously (asynchronicity disabled) Writing duplicity-full.20190207T200614Z.vol14.difftar.gz CMD: lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp-IGvjCS-1; mkdir -p ADMIN/SERVER_HOSTNAME/; put /tmp/duplicity-ZvvXnx- tempdir/mktemp-9sLuuf-16 -o ADMIN/SERVER_HOSTNAME/duplicity- 
full.20190207T200614Z.vol14.difftar.gz"" Reading results of 'lftp -c ""source /tmp/duplicity-ZvvXnx-tempdir/mkstemp- IGvjCS-1; mkdir -p ADMIN/SERVER_HOSTNAME/; put /tmp/duplicity-ZvvXnx- tempdir/mktemp-9sLuuf-16 -o ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol14.difftar.gz""' STATUS: 0 STDERR: ---- Resolving host address... ---- 1 address found: 192.168.1.64 ---- Connecting to 192.168.1.64 (192.168.1.64) port 21 <--- 220 ProFTPD 1.3.5 Server (ADMIN1) [::ffff:192.168.1.64] ---> FEAT <--- 211-Features: <--- MFF modify;UNIX.group;UNIX.mode; <--- REST STREAM <--- MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; <--- UTF8 <--- LANG en-US* <--- EPRT <--- EPSV <--- MDTM <--- TVFS <--- MFMT <--- SIZE <--- 211 End ---> LANG <--- 500 Unable to handle command ---> OPTS UTF8 ON <--- 200 UTF8 set to on initialized translation from ANSI_X3.4-1968 to UTF-8 initialized translation from UTF-8 to ANSI_X3.4-1968//TRANSLIT ---> OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner <--- 200 OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode; ---> USER USERNAME <--- 331 Password required for USERNAME ---> PASS PASSWORD <--- 230 User USERNAME logged in ---> PWD <--- 257 ""/"" is the current directory ---> MKD ADMIN <--- 550 ADMIN: Permission denied ---> MKD ADMIN/SERVER_HOSTNAME <--- 550 ADMIN/SERVER_HOSTNAME: File exists ---> MKD ADMIN/SERVER_HOSTNAME/ <--- 550 ADMIN/SERVER_HOSTNAME/: File exists mkdir: Access failed: 550 ADMIN/SERVER_HOSTNAME/: File exists ---> TYPE I <--- 200 Type set to I ---> EPSV <--- 229 Entering Extended Passive Mode (|||60948|) ---- Connecting data socket to (192.168.1.64) port 60948 ---- Data connection established ---> ALLO 209677605 <--- 202 No storage allocation necessary ---> STOR ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol14.difftar.gz <--- 150 Opening BINARY mode data connection for ADMIN/SERVER_HOSTNAME/duplicity-full.20190207T200614Z.vol14.difftar.gz ---- Closing data socket <--- 226 
Transfer complete ---> MFMT 20190207201048 ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol14.difftar.gz <--- 213 Modify=20190207201048; ADMIN/SERVER_HOSTNAME/duplicity- full.20190207T200614Z.vol14.difftar.gz ---> QUIT <--- 221 Goodbye. ---- Closing control socket STDOUT: Deleting /tmp/duplicity-ZvvXnx-tempdir/mktemp-9sLuuf-16 Forgetting temporary file /tmp/duplicity-ZvvXnx-tempdir/mktemp-9sLuuf-16 AsyncScheduler: task completed successfully Processed volume 14 Registering (mktemp) temporary file /tmp/duplicity-ZvvXnx-tempdir/mktemp- FriMcv-17 Selection: examining path /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA_2.stk Selection: + no selection functions found. Including Selecting /home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA_2.stk Comparing home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA_2.stk and None Getting delta of (home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA_2.stk reg) and None A home/datavol1/working/SOMEUSER/hairs/Amy S/181016 HAIRS ON vac ON H2O low HG vol/1 0.8ul bendy circular pattern all bent in middle/actuation 1hr later/Stream 220mA_2.stk Segmentation fault root@SERVER_HOSTNAME:/home/USERNAME# ```",6 118019415,2019-02-05 15:50:47.709,Stream archive to backend while it is being created (lp:#1814792),"[Original report](https://bugs.launchpad.net/bugs/1814792) created by **Giovanni Mascellani (giomasce)** ``` It currently seems that archives generated by duplicity are first encrypted with GPG and stored in the temporary file, then read again and sent to the backend. 
I wonder if it is possible, at least for backends that support such an operation, to stream the archive directly to the backend while it is being created and encrypted, without having to wait for the full encryption to finish. This would have several advantages, it seems: first, there would be less total IO, because there would be no need for temporary files; second, IO and CPU usage would be levelled out by the network connection (which is presumably slower), instead of spiking while the disk is read and gpg is executed; third, the network (again presumably the bottleneck) would be better used, because it would not have to wait for archive generation and encryption. More generally, the total running time would be the maximum of the archive generation and upload times, instead of their sum, and, as I said, resource usage would be more level. Thanks for duplicity, it is really a nice tool! ``` Original tags: wishlist",6 118019414,2019-01-14 06:51:00.458,Feature Request: More Informative Notice Level (lp:#1811643),"[Original report](https://bugs.launchpad.net/bugs/1811643) created by **Fedaykin (fedaykin7c2)** ``` It would be much more practical if Notice Level (4) contained additional pieces of information about the backup in progress, like: Args collection-status Processed volume and also the upload speed for each volume. ``` Original tags: feature",6 118022093,2019-01-04 11:55:21.133,Rename manifest file (lp:#1810511),"[Original report](https://bugs.launchpad.net/bugs/1810511) created by **Anthony (0othan)** ``` Hello all, Is it possible to rename the manifest file? Currently, manifests are named duplicity-inc.XXXXZ.to.YYYYZ.manifest. The issue with S3 is that we cannot ignore manifests when we move objects to Glacier, due to the naming convention.
We can filter by object prefix (for example, for signatures: PATH/duplicity-new-signatures | PATH/duplicity-full-signatures), but regexes are not supported: we cannot use PATH/*.manifest to match manifest files, so if we use PATH/duplicity-full, both the manifests and the volumes will be excluded during the move to Glacier. The idea is to exclude manifest/signature files from the Glacier policy, to avoid having to wait 2-3 hours when, for example, we need to check backups on another desktop... For now, we can only add a filter for signatures... Best regards, ```",6 118022091,2018-12-27 02:30:49.797,duplicity remove-all-but-n-full Fails with TypeError: an integer is required (lp:#1809851),"[Original report](https://bugs.launchpad.net/bugs/1809851) created by **Eric Koski (ekoski1)** ``` duplicity 0.7.17 Python 2.7.15rc1 Ubuntu 18.04LTS ext4 filesystem command-line output: $ duplicity remove-all-but-n-full 1 --force file:///media/eric/Lnx_bk0 Synchronizing remote metadata to local cache... GnuPG passphrase for decryption: Copying duplicity-full-signatures.20171002T012817Z.sigtar.gpg to local cache. Copying duplicity-full-signatures.20180102T070445Z.sigtar.gpg to local cache. Copying duplicity-full-signatures.20180405T080021Z.sigtar.gpg to local cache. Cleanup of temporary directory /tmp/duplicity-j3izoa-tempdir failed - this is probably a bug.
Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1393, in main do_backup(action) File ""/usr/bin/duplicity"", line 1414, in do_backup sync_archive() File ""/usr/bin/duplicity"", line 1204, in sync_archive copy_to_local(fn) File ""/usr/bin/duplicity"", line 1154, in copy_to_local tdp.move(globals.archive_dir.append(loc_name)) File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 643, in move self.copy(new_path) File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 463, in copy self.copy_attribs(other) File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 470, in copy_attribs util.maybe_ignore_errors(lambda: os.chmod(other.name, self.mode)) File ""/usr/lib/python2.7/dist-packages/duplicity/util.py"", line 92, in maybe_ignore_errors return fn() File ""/usr/lib/python2.7/dist-packages/duplicity/path.py"", line 470, in util.maybe_ignore_errors(lambda: os.chmod(other.name, self.mode)) TypeError: an integer is required ```",6 118022080,2018-12-03 19:05:37.011,duplicity won't restore backup due to unknown error (lp:#1806466),"[Original report](https://bugs.launchpad.net/bugs/1806466) created by **mariana xavier (marianaxrp)** ``` Hello, Last week I did a fresh install on my Ubuntu but now I can't restore the backup I had made. 
I get the following error message: Traceback (innermost last): File ""/usr/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1477, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 733, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 769, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 1 lsb_release -d Description: Ubuntu 18.04.1 LTS dpkg-query -W deja-dup duplicity deja-dup 37.1-2fakesync1 duplicity 0.7.18.2+bzr1367-0ubuntu1~ubuntu18.04.1 ```",6 118022077,2018-10-26 12:16:30.268,Backup fails with “zlib inflate problem: invalid stored block lengths” (lp:#1800139),"[Original report](https://bugs.launchpad.net/bugs/1800139) created by **rbuick (robert-buick)** ``` It fails backup with: Backup Failed GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: WARNING: ""--no-use-agent"" is an obsolete option - it has no effect gpg: AES256 encrypted data gpg: encrypted with 1 passphrase gpg: Fatal: 
zlib inflate problem: invalid stored block lengths ===== End GnuPG log ===== ----- Attempting to restore returns Restore Failed GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: WARNING: ""--no-use-agent"" is an obsolete option - it has no effect gpg: AES256 encrypted data gpg: encrypted with 1 passphrase gpg: Fatal: zlib inflate problem: invalid stored block lengths ===== End GnuPG log ===== ------ lsb_release -d Description: Ubuntu 18.04.1 LTS deja-dup 37.1-2fakesync1 duplicity 0.7.17-0ubuntu1 ------ deja-dup.gsettings: org.gnome.DejaDup last-restore '' org.gnome.DejaDup periodic true org.gnome.DejaDup periodic-period 1 org.gnome.DejaDup full-backup-period 90 org.gnome.DejaDup backend 'remote' org.gnome.DejaDup last-run '2018-10-22T23:01:17.255583Z' org.gnome.DejaDup nag-check '2018-10-08T14:09:09.699233Z' org.gnome.DejaDup prompt-check 'disabled' org.gnome.DejaDup root-prompt true org.gnome.DejaDup include-list ['$HOME'] org.gnome.DejaDup exclude-list ['$TRASH', '$DOWNLOAD'] org.gnome.DejaDup last-backup '2018-10-22T23:01:17.255583Z' org.gnome.DejaDup allow-metered false org.gnome.DejaDup delete-after 0 org.gnome.DejaDup.Rackspace username '' org.gnome.DejaDup.Rackspace container 'my-linux-pc' org.gnome.DejaDup.S3 id '' org.gnome.DejaDup.S3 bucket '' org.gnome.DejaDup.S3 folder 'my-linux-pc' org.gnome.DejaDup.OpenStack authurl '' org.gnome.DejaDup.OpenStack tenant '' org.gnome.DejaDup.OpenStack username '' org.gnome.DejaDup.OpenStack container 'my-linux-pc' org.gnome.DejaDup.GCS id '' org.gnome.DejaDup.GCS bucket '' org.gnome.DejaDup.GCS folder 'my-linux-pc' org.gnome.DejaDup.Local folder 'my-linux-pc' org.gnome.DejaDup.Remote uri 'smb://backup_nas/backup/' org.gnome.DejaDup.Remote folder 'MacBook_linux' org.gnome.DejaDup.Drive uuid '' org.gnome.DejaDup.Drive icon '' org.gnome.DejaDup.Drive folder 'my-linux-pc' org.gnome.DejaDup.Drive name '' org.gnome.DejaDup.GOA id '' org.gnome.DejaDup.GOA folder 'my-linux-pc' org.gnome.DejaDup.GOA type '' 
org.gnome.DejaDup.File short-name '' org.gnome.DejaDup.File type 'normal' org.gnome.DejaDup.File migrated true org.gnome.DejaDup.File name '' org.gnome.DejaDup.File path 'smb:///home/me_as_a_user/deja-dup' org.gnome.DejaDup.File uuid '' org.gnome.DejaDup.File icon '' org.gnome.DejaDup.File relpath@ ay [] -------- DEJA_DUP_DEBUG=1 deja-dup --backup | tail -n 1000 > /tmp/deja-dup.log empty --------- DEJA_DUP_DEBUG=1 deja-dup --restore | tail -n 1000 > /tmp/deja-dup.log empty ```",14 118022074,2018-10-02 10:56:33.062,Add option to specify the DropBox upload chunk size (lp:#1795621),"[Original report](https://bugs.launchpad.net/bugs/1795621) created by **Pedro Gimeno (pgimeno)** ``` The current dpbxbackend.py code includes the following snippet: # This is chunk size for upload using Dpbx chumked API v2. It doesn't # make sense to make it much large since Dpbx SDK uses connection pool # internally. So multiple chunks will sent using same keep-alive socket # Plus in case of network problems we most likely will be able to retry # only failed chunk DPBX_UPLOAD_CHUNK_SIZE = 16 * 1024 * 1024 However, DropBox has a limit of 25,000 API calls per month per team. With that chunk size, this means a maximum of about 390 GiB (419 GB) per month. That turned out to be insufficient for us, and we spent all of our API call quota for this month very quickly for this reason (it's October 2, therefore the quota won't be reset for a whole month). See https://www.dropbox.com/developers/reference/data-transport-limit for more information about the limits and the best practices related to division of files into chunks during uploads in Dropbox. The upload chunk size should be changeable without editing the sources (a command-line option would be ideal), and a notice should be added in the man page stating this problem. 
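The arithmetic behind the quota exhaustion is worth making explicit; a small sketch (pure illustration, not duplicity code) of how the chunk size bounds monthly upload volume under a per-call quota:

```python
import math

# 16 MiB: the hard-coded DPBX_UPLOAD_CHUNK_SIZE quoted above.
CHUNK = 16 * 1024 * 1024
# Monthly per-team API-call limit cited in this report.
QUOTA = 25000

def api_calls(file_sizes, chunk_size=CHUNK):
    # Each uploaded chunk costs one API call; a partial final
    # chunk still costs a whole call, hence the ceiling.
    return sum(max(1, math.ceil(size / chunk_size)) for size in file_sizes)

def max_monthly_bytes(chunk_size=CHUNK, quota=QUOTA):
    # Upper bound on data uploadable per month at a given chunk size.
    return quota * chunk_size
```

With the defaults this gives 25,000 x 16 MiB = 390.625 GiB (about 419 GB) per month, exactly the ceiling described above; doubling the chunk size doubles the cap.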
The suggested command line option is: --dropbox-upload-chunk-size The suggested text for the man page is: For the option: --dropbox-upload-chunk-size Defines the size in MB of each upload chunk for DropBox. Each chunk implies one API call, and the number of API calls may be limited. Defaults to 16 MB. See also A NOTE ON DROPBOX ACCESS. For A NOTE ON DROPBOX ACCESS, add: 4. Note that there may be a limit to the number of API calls that a Dropbox application may use per month. As of this writing, most account types have a limit of 25,000 API calls per month. Files are uploaded in chunks, and each chunk upload effects an API call, therefore you may need to adjust the option --dropbox-upload-chunk-size appropriately according to the size of your backup. I can try to come up with a patch, but not being familiar with the code or the language, I can't guarantee it will be of sufficient quality. ```",6 118022071,2018-09-27 15:22:54.096,dup_threading.py fails to traceback: AttributeError: 'exceptions.TypeError' object has no attribute 'with_traceback' (lp:#1794819),"[Original report](https://bugs.launchpad.net/bugs/1794819) created by **Marijn Vriens (marijnvriens)** ``` When running duplicity-backup to webdav it fails, and then the traceback fails to generate in dup_threading.py. This happens after uploading the first file correctly to webdav, and creating the second file to upload. 
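The `with_traceback` method named in the error exists only on Python 3 exception objects; under Python 2 the three-argument raise statement is needed instead. A sketch of a version-agnostic re-raise helper in the spirit of `six.reraise` (illustrative of the kind of shim required, not duplicity's actual fix):

```python
import sys

def reraise(exc, tb):
    # Python 3: exceptions carry with_traceback().
    if sys.version_info[0] >= 3:
        raise exc.with_traceback(tb)
    # Python 2: three-argument raise. Kept inside exec() as a string,
    # because the statement is a syntax error under Python 3.
    exec('raise type(exc), exc, tb')
```

The worker thread would capture the pair via `sys.exc_info()`, and the waiter would call `reraise(state['error'], state['trace'])` instead of `state['error'].with_traceback(state['trace'])`.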
Here's the beginning and end of the session: ================================================================================ duplicity 0.7.18 (August 21, 2018) Args: /usr/local/bin/duplicity --verbosity i --asynchronous-upload --full- if-older-than 3M --encrypt-key 0xD58004C930983622 --sign-key 0xD371AC4B4CC1A2AB /home/marijn --exclude-filelist /home/marijn/.duplicity/list.txt webdavs://something/something Linux muis2 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018 x86_64 x86_64 /usr/bin/python2 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ================================================================================ ... A Some/file/A AsyncScheduler: scheduling task for asynchronous execution Processed volume 1 Writing duplicity-full.20180927T144546Z.vol1.difftar.gpg WebDAV PUT /remote.php/webdav/muis2/duplicity- full.20180927T144546Z.vol1.difftar.gpg request with headers: {'Connection': 'keep-alive', 'Authorization': 'Someauthcreds'} WebDAV data length: 209920753 A Some/file/B A Some/file/C AsyncScheduler: scheduling task for asynchronous execution WebDAV response status 201 with reason 'Created'. 
AsyncScheduler: task execution done (success: False) AsyncScheduler: a previously scheduled task has failed; propagating the result immediately Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1567, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1553, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1405, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1523, in do_backup full_backup(col_stats) File ""/usr/local/bin/duplicity"", line 584, in full_backup globals.backend) File ""/usr/local/bin/duplicity"", line 466, in write_multivol (tdp, dest_filename, vol_num))) File ""/usr/local/lib/python2.7/dist- packages/duplicity/asyncscheduler.py"", line 152, in schedule_task return self.__run_asynchronously(fn, params) File ""/usr/local/lib/python2.7/dist- packages/duplicity/asyncscheduler.py"", line 216, in __run_asynchronously with_lock(self.__cv, wait_for_and_register_launch) File ""/usr/local/lib/python2.7/dist-packages/duplicity/dup_threading.py"", line 105, in with_lock return fn() File ""/usr/local/lib/python2.7/dist- packages/duplicity/asyncscheduler.py"", line 208, in wait_for_and_register_launch check_pending_failure() # raise on fail File ""/usr/local/lib/python2.7/dist- packages/duplicity/asyncscheduler.py"", line 192, in check_pending_failure self.__failed_waiter() File ""/usr/local/lib/python2.7/dist-packages/duplicity/dup_threading.py"", line 201, in waiter raise state['error'].with_traceback(state['trace']) AttributeError: 'exceptions.TypeError' object has no attribute 'with_traceback' ```",6 118022069,2018-09-27 14:44:11.408,"Backup fails: ""secret key not available"" with key available. 
(lp:#1794808)","[Original report](https://bugs.launchpad.net/bugs/1794808) created by **Marijn Vriens (marijnvriens)** ``` Hi, I have the issue that when I have a full-backup that I break-off half-way for whatever reason, trying to restart/redo the backup later fails: ================================================================================ duplicity 0.7.18 (August 21, 2018) Args: /usr/local/bin/duplicity --verbosity i --asynchronous-upload --full- if-older-than 3M --encrypt-key 0xD58004C930983622 --sign-key 0xD371AC4B4CC1A2AB /home/marijn --exclude-filelist /home/marijn/.duplicity/list.txt webdavs://something/something Linux muis2 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018 x86_64 x86_64 /usr/bin/python2 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ================================================================================ ... WebDAV data length: 4 WebDAV response status 200 with reason 'OK'. GPG error detail: Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1567, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1553, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1405, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1426, in do_backup sync_archive() File ""/usr/local/bin/duplicity"", line 1216, in sync_archive copy_to_local(fn) File ""/usr/local/bin/duplicity"", line 1164, in copy_to_local gpg.GzipWriteFile(src_iter, tdp.name, size=sys.maxsize) File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 447, in GzipWriteFile new_block = block_iter.next() File ""/usr/local/bin/duplicity"", line 1144, in next self.fileobj.close() File ""/usr/local/lib/python2.7/dist-packages/duplicity/dup_temp.py"", line 227, in close assert not self.fileobj.close() File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 305, in close self.gpg_failed() File ""/usr/local/lib/python2.7/dist-packages/duplicity/gpg.py"", line 272, in gpg_failed 
raise GPGError(msg) GPGError: GPG Failed, see log below: ==== Begin GnuPG log ===== gpg: encrypted with 4096-bit RSA key, ID 0xCC15A5FC57E98B61, created 2018-06-15 ""Marijn P. Vriens "" gpg: decryption failed: secret key not available ===== End GnuPG log ===== Which is weird in various ways. 1) The secret key is, in fact, available on the machine. 2) It's reporting the ID of a subkey (see below), not the ID of the master key. 3) This key is used with --encrypt-key, so no private key should be necessary in the first place. $ gpg --list-secret-keys /home/marijn/.gnupg/pubring.gpg ------------------------------- sec# rsa4096/0xD58004C930983622 2015-11-14 [C] [expires: 2019-12-07] uid [ultimate] Marijn P. Vriens ssb rsa4096/0x952F274190FC721C 2018-06-15 [S] [expires: 2018-12-12] ssb rsa4096/0xCC15A5FC57E98B61 2018-06-15 [E] [expires: 2018-12-12] ssb rsa4096/0x9184AE674604012D 2018-06-15 [A] [expires: 2018-12-12] sec rsa4096/0xD371AC4B4CC1A2AB 2016-07-18 [SC] uid [ultimate] muis machine ssb rsa4096/0x56F8EEA2EC17E17B 2016-07-18 [E] Removing the files that were created on the backup location by the broken-off process works sometimes. Other times it keeps failing with this message, even after removing all created files. I have no idea what the difference is, but it leaves me with no choice but to use a different directory to back up into. I suspect that Duplicity is having issues with the fact that I'm using GPG subkeys, and fails to select the correct secret key because of that, but I'm not certain of it. ```",6 118022065,2018-09-16 10:28:13.276,InvalidGrantError at every use of authenticated onedrive oauth (lp:#1792785),"[Original report](https://bugs.launchpad.net/bugs/1792785) created by **Jakob Böttger (jswiss)** ``` duplicity 0.7.18.1 Python 2.7.13 Debian 9.3 oauthlib (2.1.0) requests-oauthlib (1.0.0) request (1.0.2) If you authenticate OneDrive via OAuth, the .json is saved and the running instance of duplicity works.
At the next use duplicity is always failing with: Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1560, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1546, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1385, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1135, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1010, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/onedrivebackend.py"", line 90, in __init__ self.initialize_oauth2_session() File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/onedrivebackend.py"", line 127, in initialize_oauth2_session self.http_client.refresh_token(self.OAUTH_TOKEN_URI) File ""/usr/local/lib/python2.7/dist- packages/requests_oauthlib/oauth2_session.py"", line 309, in refresh_token self.token = self._client.parse_request_body_response(r.text, scope=self.scope) File ""/usr/local/lib/python2.7/dist- packages/oauthlib/oauth2/rfc6749/clients/base.py"", line 411, in parse_request_body_response self.token = parse_token_response(body, scope=scope) File ""/usr/local/lib/python2.7/dist- packages/oauthlib/oauth2/rfc6749/parameters.py"", line 379, in parse_token_response validate_token_parameters(params) File ""/usr/local/lib/python2.7/dist- packages/oauthlib/oauth2/rfc6749/parameters.py"", line 386, in validate_token_parameters raise_from_error(params.get('error'), params) File ""/usr/local/lib/python2.7/dist- packages/oauthlib/oauth2/rfc6749/errors.py"", line 
415, in raise_from_error raise cls(**kwargs) InvalidGrantError: (invalid_grant) The user could not be authenticated or the grant is expired. The user must first sign in and if needed grant the client application access to the requested scope. ``` Original tags: oauth onedrive",6 118022061,2018-09-01 01:00:04.032,B2 application key support (lp:#1790248),"[Original report](https://bugs.launchpad.net/bugs/1790248) created by **Tobias Sachs (sachstobia)** ``` Hey! B2 recently added support for separate application keys [1] just for one bucket. If I use them I get the following error with 0.7.18: Folgendes Archivverzeichnis wird benutzt: /root/.cache/duplicity/f78e085eb292fbcdee4358116ce069f5 Folgender Sicherungsname wird benutzt: f78e085eb292fbcdee4358116ce069f5 GPG binary is gpg, version 2.1.18 Import von duplicity.backends.acdclibackend Succeeded Import von duplicity.backends.azurebackend Succeeded Import von duplicity.backends.b2backend Succeeded Import von duplicity.backends.botobackend Succeeded Import von duplicity.backends.cfbackend Succeeded Import von duplicity.backends.dpbxbackend Failed: No module named dropbox Import von duplicity.backends.gdocsbackend Succeeded Import von duplicity.backends.giobackend Succeeded Import von duplicity.backends.hsibackend Succeeded Import von duplicity.backends.hubicbackend Succeeded Import von duplicity.backends.imapbackend Succeeded Import von duplicity.backends.lftpbackend Succeeded Import von duplicity.backends.localbackend Succeeded Import von duplicity.backends.mediafirebackend Succeeded Import von duplicity.backends.megabackend Succeeded Import von duplicity.backends.multibackend Succeeded Import von duplicity.backends.ncftpbackend Succeeded Import von duplicity.backends.onedrivebackend Succeeded Import von duplicity.backends.par2backend Succeeded Import von duplicity.backends.pydrivebackend Succeeded Import von duplicity.backends.rsyncbackend Succeeded Import von duplicity.backends.ssh_paramiko_backend Succeeded Import 
von duplicity.backends.ssh_pexpect_backend Succeeded Import von duplicity.backends.swiftbackend Succeeded Import von duplicity.backends.sxbackend Succeeded Import von duplicity.backends.tahoebackend Succeeded Import von duplicity.backends.webdavbackend Succeeded B2 Backend (path= backups/, bucket= XYZ, minimum_part_size= 100000000) Bucket found Hauptaufgabe: cleanup Acquiring lockfile /root/.cache/duplicity/f78e085eb292fbcdee4358116ce069f5/lockfile ================================================================================ duplicity 0.7.18 (August 21, 2018) Args: /usr/local/bin/duplicity --verbosity debug cleanup --force --sign-key XXX --encrypt-key XXX b2://XXX:XXX@XYZ/backups Linux Pandora 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 /usr/bin/python2 2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516] ================================================================================ Entfernte Metadaten werden zum lokalen Puffer synchronisiert … duplicity-full-signatures.20180304T185359Z.sigtar.gpg wird zum lokalen Puffer kopiert. 
Temporäres Verzeichnis /tmp/duplicity-Gx_hpr-tempdir wird benutzt (mktemp) temporäre Datei /tmp/duplicity-Gx_hpr-tempdir/mktemp-Sy_6xV-1 wird registriert Get: backups/duplicity-full-signatures.20180304T185359Z.sigtar.gpg -> /tmp/duplicity-Gx_hpr-tempdir/mktemp-Sy_6xV-1 Rückverfolgung des vorangegangenen Fehlers: Traceback (innermost last): File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 369, in inner_retry return fn(self, *args) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 552, in get self.backend._get(remote_filename, local_path) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/b2backend.py"", line 113, in _get b2.download_dest.DownloadDestLocalFile(local_path.name)) File ""/usr/local/lib/python2.7/dist-packages/logfury/v0_1/trace_call.py"", line 84, in wrapper return function(*wrapee_args, **wrapee_kwargs) File ""/usr/local/lib/python2.7/dist-packages/b2/bucket.py"", line 168, in download_file_by_name url, download_dest, progress_listener, range_ File ""/usr/local/lib/python2.7/dist-packages/logfury/v0_1/trace_call.py"", line 84, in wrapper return function(*wrapee_args, **wrapee_kwargs) File ""/usr/local/lib/python2.7/dist-packages/b2/transferer.py"", line 45, in download_file_from_url range_=range_, File ""/usr/local/lib/python2.7/dist-packages/b2/session.py"", line 39, in wrapper return f(api_url, account_auth_token, *args, **kwargs) File ""/usr/local/lib/python2.7/dist-packages/b2/raw_api.py"", line 248, in download_file_from_url return self.b2_http.get_content(url, request_headers) File ""/usr/local/lib/python2.7/dist-packages/b2/b2http.py"", line 337, in get_content response = _translate_and_retry(do_get, try_count, None) File ""/usr/local/lib/python2.7/dist-packages/b2/b2http.py"", line 119, in _translate_and_retry return _translate_errors(fcn, post_params) File ""/usr/local/lib/python2.7/dist-packages/b2/b2http.py"", line 55, in _translate_errors int(error['status']), error['code'], 
error['message'], post_params FileNotPresent: File not present: Versuch 1 fehlgeschlagen. FileNotPresent: File not present: ^CSperrdatei »/root/.cache/duplicity/f78e085eb292fbcdee4358116ce069f5/lockfile« wird freigegeben Bereits gemerkte temporäre Datei /tmp/duplicity-Gx_hpr-tempdir/mktemp- Sy_6xV-1 wird entfernt INT abgefangen … wird beendet. Sperrdatei »/root/.cache/duplicity/f78e085eb292fbcdee4358116ce069f5/lockfile« wird freigegeben Sperrdatei »/root/.cache/duplicity/f78e085eb292fbcdee4358116ce069f5/lockfile« wird freigegeben Python 2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516] on linux2 Distributor ID: Debian Description: Debian GNU/Linux 9.5 (stretch) Release: 9.5 Codename: stretch #pip show b2 Name: b2 Version: 1.3.6 [1] https://www.backblaze.com/b2/docs/application_keys.html ```",16 118022058,2018-08-27 04:33:24.527,Progress ETA is very wrong if upload is restarted. (lp:#1789153),"[Original report](https://bugs.launchpad.net/bugs/1789153) created by **Jeff Johnson (jeffjohnson0)** ``` When aborting an upload and the next invocation requires a 'RESTART', the ETA is very wrong and never seems to correct it self. In the log below, it seems that the first 23% is done in 0 seconds, which skews the ETA. Every 30 seconds or so it will add a minute. This upload should take ~18 hours. It would be nice if the RESTART didn't include the already completed percentage. RESTART: Volumes 99 to 100 failed to upload before termination. Restarting backup at volume 99. 
Restarting after volume 98, file XXXXXXXXXX, block 9016 0.0KB 00:00:00 [0.0KB/s] [=========> ] 23% ETA 0sec 0.0KB 00:00:10 [0.0KB/s] [=========> ] 23% ETA < 45sec 8.7MB 00:00:20 [267.9KB/s] [=========> ] 23% ETA 1min 18.5MB 00:00:30 [488.0KB/s] [=========> ] 23% ETA 1min 30sec 28.0MB 00:00:40 [631.5KB/s] [=========> ] 23% ETA 2min 36.4MB 00:00:50 [702.1KB/s] [=========> ] 23% ETA 2min 30sec 46.0MB 00:01:00 [787.0KB/s] [=========> ] 23% ETA 3min 55.7MB 00:01:10 [846.5KB/s] [=========> ] 23% ETA 3min 30sec 65.6MB 00:01:20 [897.5KB/s] [=========> ] 23% ETA 4min 75.1MB 00:01:30 [919.9KB/s] [=========> ] 23% ETA 4min 30sec 84.8MB 00:01:40 [940.0KB/s] [=========> ] 23% ETA 5min 30sec 94.2MB 00:01:50 [949.1KB/s] [=========> ] 23% ETA 6min 104.0MB 00:02:00 [962.7KB/s] [=========> ] 23% ETA 6min 113.1MB 00:02:10 [954.2KB/s] [=========> ] 23% ETA 7min 123.3MB 00:02:20 [980.1KB/s] [=========> ] 23% ETA 7min 133.0MB 00:02:30 [983.1KB/s] [=========> ] 23% ETA 8min 142.6MB 00:02:40 [985.5KB/s] [=========> ] 23% ETA 8min 152.4MB 00:02:50 [989.2KB/s] [=========> ] 23% ETA 9min 161.9MB 00:03:00 [983.6KB/s] [=========> ] 23% ETA 9min 169.1MB 00:03:10 [911.3KB/s] [=========> ] 23% ETA 10min 178.8MB 00:03:20 [935.7KB/s] [=========> ] 23% ETA 11min 188.2MB 00:03:30 [943.7KB/s] [=========> ] 23% ETA 11min 197.2MB 00:03:40 [936.4KB/s] [=========> ] 23% ETA 12min 203.0MB 00:03:50 [832.0KB/s] [=========> ] 23% ETA 12min 212.8MB 00:04:00 [885.2KB/s] [=========> ] 23% ETA 13min 222.9MB 00:04:10 [928.2KB/s] [=========> ] 23% ETA 13min 232.0MB 00:04:20 [930.3KB/s] [=========> ] 23% ETA 14min 241.6MB 00:04:30 [944.2KB/s] [=========> ] 23% ETA 14min 251.5MB 00:04:40 [964.4KB/s] [=========> ] 23% ETA 15min 260.8MB 00:04:50 [961.1KB/s] [=========> ] 23% ETA 15min 271.3MB 00:05:00 [995.2KB/s] [=========> ] 23% ETA 16min 280.1MB 00:05:10 [966.6KB/s] [=========> ] 23% ETA 17min 289.5MB 00:05:20 [966.5KB/s] [=========> ] 23% ETA 17min 298.5MB 00:05:30 [951.0KB/s] [=========> ] 23% ETA 18min 307.8MB 
00:05:40 [951.7KB/s] [=========> ] 23% ETA 18min 317.3MB 00:05:50 [959.1KB/s] [=========> ] 23% ETA 19min 326.8MB 00:06:00 [961.5KB/s] [=========> ] 23% ETA 19min 336.5MB 00:06:10 [972.5KB/s] [=========> ] 23% ETA 20min 346.5MB 00:06:20 [987.2KB/s] [=========> ] 23% ETA 20min 355.5MB 00:06:30 [965.8KB/s] [=========> ] 23% ETA 21min 365.0MB 00:06:40 [967.1KB/s] [=========> ] 23% ETA 22min 379.0MB 00:06:50 [1.1MB/s] [=========> ] 23% ETA 22min 392.7MB 00:07:00 [1.2MB/s] [=========> ] 23% ETA 23min 403.1MB 00:07:10 [1.1MB/s] [=========> ] 23% ETA 23min 417.1MB 00:07:20 [1.2MB/s] [=========> ] 23% ETA 24min 430.8MB 00:07:30 [1.3MB/s] [=========> ] 23% ETA 24min 444.7MB 00:07:40 [1.3MB/s] [=========> ] 23% ETA 25min 458.6MB 00:07:50 [1.3MB/s] [=========> ] 23% ETA 25min Ubuntu is version 16.04. Python is version 2.7.12. Duplicity is version 0.7.17. b2 backend is 1.3.4. ```",6 118022056,2018-06-16 15:38:06.367,onedrive authentication fails (lp:#1777256),"[Original report](https://bugs.launchpad.net/bugs/1777256) created by **Louis Kirsch (louiskirsch)** ``` Trying to run a command with onedrive for the first time, e.g. `duplicity list-current-files onedrive://backups` triggers authentication correctly: ``` Could not load OAuth2 token. Trying to create a new one. In order to authorize duplicity to access your OneDrive, please open XXX in a browser and copy the URL of the blank page the dialog leads to. 
``` and then after submitting the URL of the blank page crashes with ``` Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1380, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/duplicity/commandline.py"", line 1127, in ProcessCommandLine globals.backend = backend.get_backend(args[0]) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/duplicity/backends/onedrivebackend.py"", line 90, in __init__ self.initialize_oauth2_session() File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/duplicity/backends/onedrivebackend.py"", line 157, in initialize_oauth2_session authorization_response=redirected_to) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/requests_oauthlib/oauth2_session.py"", line 244, in fetch_token self._client.parse_request_body_response(r.text, scope=self.scope) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/oauthlib/oauth2/rfc6749/clients/base.py"", line 409, in parse_request_body_response self.token = parse_token_response(body, scope=scope) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/oauthlib/oauth2/rfc6749/parameters.py"", line 376, in parse_token_response validate_token_parameters(params) File ""/usr/local/Cellar/duplicity/0.7.17/libexec/lib/python2.7/site- packages/oauthlib/oauth2/rfc6749/parameters.py"", line 406, in validate_token_parameters raise 
w Warning: Scope has changed from ""wl.skydrive_update wl.offline_access wl.skydrive"" to ""onedrive.readwrite wl.skydrive_update wl.offline_access wl.signin wl.skydrive"". ``` Duplicity version: 0.7.17 Python version 2.7 OS Distro and version OSX Type of target filesystem: OneDrive ```",6 118022052,2018-06-13 17:59:47.587,Backup chain corrupted due to scp failure (lp:#1776733),"[Original report](https://bugs.launchpad.net/bugs/1776733) created by **Bouke (bouke-haarsma)** ``` Today I've been trying to salvage some lost data using our duplicity backups. However during the restoration I've run into the following error a few times: Invalid data - SHA1 hash mismatch for file: duplicity-inc.20180601T033002Z.to.20180601T073003Z.vol16.difftar.gpg Digging through the backup volumes, that specific increment caught my attention: 200M Jun 1 09:38 duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol15.difftar.gpg 21M Jun 1 09:38 duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol16.difftar.gpg 201M Jun 1 10:27 duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol19.difftar.gpg Notably: - volume 16 is incomplete - volume 17 and 18 are missing In the logs when creating that backup, the following relevant messages have been outputted: Giving up after 5 attempts. BackendException: Error running 'scp -oServerAliveInterval=15 -oServerAliveCountMax=2 /tmp/duplicity- VA2x4D-tempdir/mktemp-fqSlsK-18 offsite- backup.cb.local:/backup/dbs02r2.cb.local/duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol16.difftar.gpg' Giving up after 5 attempts. BackendException: Error running 'scp -oServerAliveInterval=15 -oServerAliveCountMax=2 /tmp/duplicity- VA2x4D-tempdir/mktemp-RbVN9m-19 offsite- backup.cb.local:/backup/dbs02r2.cb.local/duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol17.difftar.gpg' Giving up after 5 attempts. 
BackendException: Error running 'scp -oServerAliveInterval=15 -oServerAliveCountMax=2 /tmp/duplicity- VA2x4D-tempdir/mktemp-SEdTr8-20 offsite- backup.cb.local:/backup/dbs02r2.cb.local/duplicity- inc.20180601T033002Z.to.20180601T073003Z.vol18.difftar.gpg' For reference, the command used to produce this backup is: /usr/bin/duplicity --full-if-older-than 1M --encrypt-key XXXXXXXX --max-blocksize 1048576 --gpg-options -z 1 --asynchronous-upload -v 8 --include /my/data --exclude ** /my pexpect+scp://offsite-backup//backup Duplicity version: 0.7.15 Python version: 2.7.5 OS: CentOS 7.3 EL7 (centos-release-7-3.1611.el7.centos.x86_64) Filesystem: XFS Looking through the backup logs, I see quite a few of those ""giving up"" messages. Probably those increments are corrupted as well. I'm starting to lose my confidence in the backups produced by duplicity. I expect that when an increment hasn't fully completed, duplicity would not continue to use that increment for the backup chain. Am I doing something horribly wrong, or is this expected behaviour from duplicity? ```",6 118022683,2018-04-16 21:21:12.927,"Backup failed, failed due to an unknown error (lp:#1764534)","[Original report](https://bugs.launchpad.net/bugs/1764534) created by **Antoine (tirocco)** ``` Hello, Since this week I have had a problem with deja-dup, backup software that worked very well for about 2 years on a weekly schedule. I have not changed anything about my system. I'm running Xubuntu 16.04, installed in September 2016. The message: The backup failed, failed due to an unknown error. 
and I have a whole long list posted here : Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1532, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1526, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1380, in main do_backup(action) File ""/usr/bin/duplicity"", line 1405, in do_backup globals.archive_dir).set_values() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 835, in get_backup_chains add_to_sets(f) File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 823, in add_to_sets if set.add_filename(filename): File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20161222T105139Z.vol1.difftar.gz', 2: 'duplicity-full.20161222T105139Z.vol2.difftar.gz', 3: 'duplicity- full.20161222T105139Z.vol3.difftar.gz', 4: 'duplicity- full.20161222T105139Z.vol4.difftar.gz', 5: 'duplicity- full.20161222T105139Z.vol5.difftar.gz', 6: 'duplicity- full.20161222T105139Z.vol6.difftar.gz', 7: 'duplicity- full.20161222T105139Z.vol7.difftar.gz', 8: 'duplicity- full.20161222T105139Z.vol8.difftar.gz', 9: 'duplicity- full.20161222T105139Z.vol9.difftar.gz', 10: 'duplicity- full.20161222T105139Z.vol10.difftar.gz', 11: 'duplicity- full.20161222T105139Z.vol11.difftar.gz', 12: 'duplicity- full.20161222T105139Z.vol12.difftar.gz', 13: 'duplicity- full.20161222T105139Z.vol13.difftar.gz', 14: 'duplicity- full.20161222T105139Z.vol14.difftar.gz', 15: 'duplicity- full.20161222T105139Z.vol15.difftar.gz', 16: 'duplicity- full.20161222T105139Z.vol16.difftar.gz', 17: 'duplicity- full.20161222T105139Z.vol17.difftar.gz', 18: 'duplicity- full.20161222T105139Z.vol18.difftar.gz', 19: 'duplicity- full.20161222T105139Z.vol19.difftar.gz', 20: 'duplicity- 
full.20161222T105139Z.vol20.difftar.gz', 21: 'duplicity- full.20161222T105139Z.vol21.difftar.gz', 22: 'duplicity- full.20161222T105139Z.vol22.difftar.gz', 23: 'duplicity- full.20161222T105139Z.vol23.difftar.gz', 24: 'duplicity- full.20161222T105139Z.vol24.difftar.gz', 25: 'duplicity- full.20161222T105139Z.vol25.difftar.gz', 26: 'duplicity- full.20161222T105139Z.vol26.difftar.gz', 27: 'duplicity- full.20161222T105139Z.vol27.difftar.gz', 28: 'duplicity- full.20161222T105139Z.vol28.difftar.gz', 29: 'duplicity- full.20161222T105139Z.vol29.difftar.gz', 30: 'duplicity- full.20161222T105139Z.vol30.difftar.gz', 31: 'duplicity- full.20161222T105139Z.vol31.difftar.gz', 32: 'duplicity- full.20161222T105139Z.vol32.difftar.gz', 33: 'duplicity- full.20161222T105139Z.vol33.difftar.gz', 34: 'duplicity- full.20161222T105139Z.vol34.difftar.gz', 35: 'duplicity- full.20161222T105139Z.vol35.difftar.gz', 36: 'duplicity- full.20161222T105139Z.vol36.difftar.gz', 37: 'duplicity- full.20161222T105139Z.vol37.difftar.gz', 38: 'duplicity- full.20161222T105139Z.vol38.difftar.gz', 39: 'duplicity- full.20161222T105139Z.vol39.difftar.gz', 40: 'duplicity- full.20161222T105139Z.vol40.difftar.gz', 41: 'duplicity- full.20161222T105139Z.vol41.difftar.gz', 42: 'duplicity- full.20161222T105139Z.vol42.difftar.gz', 43: 'duplicity- full.20161222T105139Z.vol43.difftar.gz', 44: 'duplicity- full.20161222T105139Z.vol44.difftar.gz', 46: 'duplicity- full.20161222T105139Z.vol46.difftar.gz', 47: 'duplicity- full.20161222T105139Z.vol47.difftar.gz', 48: 'duplicity- full.20161222T105139Z.vol48.difftar.gz', 52: 'duplicity- full.20161222T105139Z.vol52.difftar.gz'}, 'duplicity- full.20161222T105139Z.vol5.difftar.gz') I tried to 'restore', I also deleted the last backup, but it did not change anything. I also uninstalled and reinstalled, but nothing helps. Everything is ok in the update. I do not understand the reason ... Thank you for your answers if you see where is the problem. 
```",6 118022047,2018-03-21 18:48:23.455,Not working with OneDrive/SharePoint for businesses (lp:#1757515),"[Original report](https://bugs.launchpad.net/bugs/1757515) created by **Fedaykin (fedaykin7c2)** ``` Duplicity (all versions) does not support OneDrive for Business. My University uses SAML-based authentication, which Duplicity lacks. This authentication requires a set of cookies to be passed. The basic idea is that the browser should open the authentication site, which will ask for credentials such as username and password. Once the user is authenticated, a ticket is generated and set as a cookie, and a redirect is issued to the original site, which now grants access based on the cookie. ```",6 118022038,2018-03-18 17:45:01.160,multi backend multiple oPort for sftp fails (lp:#1756718),"[Original report](https://bugs.launchpad.net/bugs/1756718) created by **madimadi (ubuntu-madi)** ``` When the multi backend is used for multiple sftp connections, the sftp port value is inserted into the sftp command multiple times, once for every SFTP instance configured in the JSON file. Actual example: INFO 1 . Running 'sftp -oIdentityFile=/id_rsa -oTCPKeepAlive=yes -oPort=17032 -oServerAliveInterval=15 -oServerAliveCountMax=2 -oPort=11224 XXX@DOMAIM.COM ..... INFO 1 . Running 'sftp -oIdentityFile=/id_rsa -oTCPKeepAlive=yes -oPort=17032 -oServerAliveInterval=15 -oServerAliveCountMax=2 -oPort=11224 YYY@OTHERDOMAIM.COM So sftp will use only the first oPort value. As a result there's no way to use individual TCP ports for each SFTP connection in the multi backend. The same goes for the identity file: there is no support for providing multiple ssh identities. duplicity 0.7.13.1 ```",6 118022033,2018-02-26 19:46:47.247,Support named AWS profiles (lp:#1751881),"[Original report](https://bugs.launchpad.net/bugs/1751881) created by **John W. 
Lamb (jolamb)** ``` The AWS CLI config file format allows specifying multiple sets of account credentials in the same file by placing them in unique INI sections: The Boto library supports use of these named profiles via the boto3.session.Session(..., profile_name="""") named parameter: There doesn't seem to be a means of providing the profile name for s3/s3+http backup/restore in duplicity, but there should be. I would prefer this capability versus various environment variable based options. ```",6 118022031,2018-02-17 17:29:51.342,cannot import name Gio (lp:#1750168),"[Original report](https://bugs.launchpad.net/bugs/1750168) created by **Dan Jaenecke (wonk042)** ``` I am using duplicity via deja-dup to create backups; but recently it stopped working, stating ""BackendException: Hintergrundprogramm konnte nicht initialisiert werden: cannot import name Gio"" (sorry for the german error messages, unfortunately I failed to switch to english messages). The command causing this error (reduced to the minimum): duplicity --verbosity=9 --gio collection-status file:///tmp Achtung: Parameter --gio ist veraltet und wird in einer der nächsten Versionen entfernt. Bitte verwenden Sie Standarddateinamen. 
Folgendes Archivverzeichnis wird benutzt: $HOME/.cache/duplicity/c2731c0788339744944161fd8afb74dd Folgender Sicherungsname wird benutzt: c2731c0788339744944161fd8afb74dd Import von duplicity.backends.azurebackend Succeeded Import von duplicity.backends.b2backend Succeeded Import von duplicity.backends.botobackend Succeeded Import von duplicity.backends.cfbackend Succeeded Import von duplicity.backends.copycombackend Succeeded Import von duplicity.backends.dpbxbackend Succeeded Import von duplicity.backends.gdocsbackend Succeeded Import von duplicity.backends.giobackend Succeeded Import von duplicity.backends.hsibackend Succeeded Import von duplicity.backends.hubicbackend Succeeded Import von duplicity.backends.imapbackend Succeeded Import von duplicity.backends.lftpbackend Succeeded Import von duplicity.backends.localbackend Succeeded Import von duplicity.backends.megabackend Succeeded Import von duplicity.backends.multibackend Succeeded Import von duplicity.backends.ncftpbackend Succeeded Import von duplicity.backends.onedrivebackend Succeeded Import von duplicity.backends.par2backend Succeeded Import von duplicity.backends.pydrivebackend Succeeded Import von duplicity.backends.rsyncbackend Succeeded Import von duplicity.backends.ssh_paramiko_backend Succeeded Import von duplicity.backends.ssh_pexpect_backend Succeeded Import von duplicity.backends.swiftbackend Succeeded Import von duplicity.backends.sxbackend Succeeded Import von duplicity.backends.tahoebackend Succeeded Import von duplicity.backends.webdavbackend Succeeded Temporäres Verzeichnis /tmp/duplicity-EguFuP-tempdir wird benutzt Fehlerdetail des Hintergrundprogramms: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1532, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1526, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1364, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1108, in 
ProcessCommandLine globals.backend = backend.get_backend(args[0]) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 211, in get_backend_object raise BackendException(_(""Could not initialize backend: %s"") % str(sys.exc_info()[1])) BackendException: Hintergrundprogramm konnte nicht initialisiert werden: cannot import name Gio Removing --gio from the command fixes the problem, but since the command is created by deja-dup I have no idea how to get rid of that option. One possible solution I found was that the package python-gi might be missing; it was already installed, and even a $ apt install --reinstall python-gi did not help. Also, as suggested in Question #664503, I tried installing python-gobject-2 / python-gobject-2-dev, which did not help either. My OS: Ubuntu 16.04.3 LTS Version of duplicity: 0.7.06 Python 2.7.12 ```",6 118022026,2018-01-28 17:54:44.450,Faulty OK with failing mega backend (lp:#1745856),"[Original report](https://bugs.launchpad.net/bugs/1745856) created by **Michael (michaelsweden)** ``` Duplicity 0.7.16 Megatools 1.9.97 Xubuntu 16.04.3 I have found some problems while using the mega backend. 1) Not possible to specify the user (mega://user.name%40gmail.com@mega.nz/Folder) and FTP_PASSWORD; it always tries to use the .megarc file. 2) The folder at the cloud is not automatically created. 3) OK even if mega backend commands fail! MAJOR 4) This is written in megabackend.py: """"""Connect to remote store using Mega.co.nz API"""""" However it isn't true; the megatools are used. 5) If no files are found in the backend -> ""Local and Remote metadata are synchronized, no sync needed."" How can it be in sync if no files are found at one end. 
Example: "" $ PASSPHRASE=""abc"" duplicity --include '/home/user/folder/**' --exclude '**' /home/user mega://mega.nz/Tmp2 mkdir: Tmp2 megals: /Root/Tmp2 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Tmp2 Last full backup date: none No signatures found, switching to full backup. megarm: duplicity-full.20180127T123518Z.vol1.difftar.gpg megaput: duplicity-full.20180127T123518Z.vol1.difftar.gpg megarm: duplicity-full-signatures.20180127T123518Z.sigtar.gpg megaput: duplicity-full-signatures.20180127T123518Z.sigtar.gpg megarm: duplicity-full.20180127T123518Z.manifest.gpg megaput: duplicity-full.20180127T123518Z.manifest.gpg megals: /Root/Tmp2 --------------[ Backup Statistics ]-------------- StartTime 1517056520.00 (Sat Jan 27 13:35:19 2018) EndTime 1517056520.57 (Sat Jan 27 13:35:20 2018) ElapsedTime 0.57 (0.57 seconds) SourceFiles 676 SourceFileSize 11869399 (11.3 MB) NewFiles 676 NewFileSize 11869399 (11.3 MB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 676 RawDeltaSize 11599063 (11.1 MB) TotalDestinationSizeChange 3335818 (3.18 MB) Errors 0 ------------------------------------------------- $ echo $? 0 "" Note that all seems to be okay above. However no files have been uploaded to the cloud, none! It turns out that megatools is at fault. Bugs have been fixed since the version in Ubuntu (1.9.97): that version of the tools always returns 0 (OK), while the latest source code doesn't. So the biggest problem is the old and buggy megatools. However, I think duplicity should include some kind of check for this major fault. I notice duplicity does a _list in the backend after putting up files. Maybe it could use this answer (the list of files) to check that the files that were just uploaded are actually in the list. Here is a proposal for one quick check that can be added to discover when megatools fails: check that megals has got an answer from the server. 
The ""/Root"" folder should always be there, or the folder that is given as argument, if it exists. I have done some changes in megabackend.py "" --- duplicity/backends/megabackend.716 2018-01-27 14:02:02.675823111 +0100 +++ duplicity/backends/megabackend.py 2018-01-27 17:51:07.203823111 +0100 @@ -28,7 +28,7 @@ class MegaBackend(duplicity.backend.Backend): - """"""Connect to remote store using Mega.co.nz API"""""" + """"""Connect to MEGA cloud (mega.nz) using Megatools"""""" def __init__(self, parsed_url): duplicity.backend.Backend.__init__(self, parsed_url) @@ -128,6 +128,10 @@ files = subprocess.check_output(cmd) files = files.strip().split('\n') + # ensure communication with server + if self._folder not in files: + raise BackendException(""folder '%s' not found or communication error!"" % (self._folder,)) + # remove the folder name, including the path separator files = [f[len(self._folder) + 1:] for f in files] @@ -161,7 +165,12 @@ cmd = ['megaput', '-u', self._username, '-p', self._password, '--no-progress', '--path', self._folder + '/' + remote_file, local_file] - self.subprocess_popen(cmd) + result = subprocess.check_output(cmd, stderr=subprocess.STDOUT) + result = result.rstrip() + if result: + print (result) + if ""ERROR"" in result: + raise BackendException(""error from megaput!"") def delete(self, remote_file): "" Note: It can be discussed whether to check for the ""ERROR"" string or for a non-empty result string; I don't know if all kinds of faults print ERROR. I have done some tests, here are printouts from failed executions (with my patch)... megals 1.9.97 fails (this is the same type of execution as above): "" mkdir: Test22 megals: /Root/Test22 Attempt 1 failed. BackendException: folder '/Root/Test22' not found or communication error! "" megaput 1.9.97 fails: "" mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup left a partial set, restarting. 
Last full backup date: Sat Jan 27 17:50:51 2018 RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning. mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup date: none No signatures found, switching to full backup. megarm: duplicity-full.20180128T122438Z.vol1.difftar.gpg megaput: duplicity-full.20180128T122438Z.vol1.difftar.gpg ERROR: Upload failed for '/tmp/duplicity-UppYZF-tempdir/mktemp-a2MlX5-2': Parent directory doesn't exist: /Root/Test22 Attempt 1 failed. BackendException: error from megaput! "" megals 1.9.98 fails: "" mkdir: Test22 megals: /Root/Test22 Attempt 1 failed. BackendException: folder '/Root/Test22' not found or communication error! "" megaput 1.9.98 fails: "" mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup left a partial set, restarting. Last full backup date: Sun Jan 28 13:24:38 2018 RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning. mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup date: none No signatures found, switching to full backup. megarm: duplicity-full.20180128T130311Z.vol1.difftar.gpg megaput: duplicity-full.20180128T130311Z.vol1.difftar.gpg Attempt 1 failed. CalledProcessError: Command '['megaput', '--config', '/home/surf/.megarc', '--no-progress', '--path', '/Root/Test22/duplicity- full.20180128T130311Z.vol1.difftar.gpg', '/tmp/duplicity-94eemj- tempdir/mktemp-7Vui_I-2']' returned non-zero exit status 1 "" I think both old and new megatools should be supported. However maybe it can depend on which versions will be included in apt repos. 
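The reporter's first check can be factored into a small standalone helper. This is a hedged sketch based on the patch above, not duplicity's actual code; the `megals` invocation is simplified and omits the config/credential flags.

```python
import subprocess

def check_listing(output, folder):
    # megals prints the queried folder itself along with its contents, so a
    # listing that lacks the folder line means the call failed or returned
    # nothing useful.
    entries = output.strip().split('\n')
    return folder in entries

def assert_folder_listed(folder):
    # Raise instead of silently reporting success when megals comes back empty.
    out = subprocess.check_output(['megals', folder]).decode()
    if not check_listing(out, folder):
        raise RuntimeError("folder '%s' not found or communication error!" % folder)
```

Separating the parsing from the subprocess call keeps the "did the server answer" decision testable without a live MEGA account.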
Now I test with duplicity 0.7.16 (without my patch) and newer megatools 1.9.98: "" mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup left a partial set, restarting. Last full backup date: Sun Jan 28 14:03:11 2018 RESTART: The first volume failed to upload before termination. Restart is impossible...starting backup from beginning. mkdir: Test22 megals: /Root/Test22 Local and Remote metadata are synchronized, no sync needed. megals: /Root/Test22 Last full backup date: none No signatures found, switching to full backup. megarm: duplicity-full.20180128T130740Z.vol1.difftar.gpg megaput: duplicity-full.20180128T130740Z.vol1.difftar.gpg Attempt 1 failed. BackendException: Error running 'megaput --config /home/surf/.megarc --no-progress --path /Root/Test22/duplicity- full.20180128T130740Z.vol1.difftar.gpg /tmp/duplicity-tn9xZW- tempdir/mktemp-eXq1NT-2': returned 1, with output: ERROR: Upload failed for '/tmp/duplicity-tn9xZW-tempdir/mktemp-eXq1NT-2': Parent directory doesn't exist: /Root/Test22 "" So here we actually get an error, however not on megals. PS. Please excuse me, I'm a C/C++ developer and have never written python code. So I don't feel comfortable doing this. I hope some experienced python developer can check this out. Sorry, but I haven't made a proposal to fix all the problems I listed at the top. //Michael ```",6 118022019,2018-01-26 10:25:04.358,Patch: add --compression-level option (lp:#1745582),"[Original report](https://bugs.launchpad.net/bugs/1745582) created by **B. Reitsma (breitsma)** ``` duplicity-0.7.16 I've created a small patch to make the compression level configurable. 
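For context, the knob such a patch exposes maps onto the `compresslevel` argument of Python's gzip writer; a minimal illustration of threading a level through to the writer (this is an editorial sketch, not the reporter's diff):

```python
import gzip
import io

def gzip_bytes(data, level=6):
    # gzip accepts compresslevel 1 (fastest) through 9 (best compression);
    # a --compression-level option essentially carries a value like this
    # from the command line to wherever the GzipFile writer is constructed.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=level) as f:
        f.write(data)
    return buf.getvalue()
```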
$ diffstat -p1 compression_level.diff duplicity/commandline.py | 3 +++ duplicity/globals.py | 2 ++ duplicity/gpg.py | 2 +- 3 files changed, 6 insertions(+), 1 deletion(-) ```",6 118022016,2018-01-18 13:16:51.765,Unable to upload data in existing bucket (lp:#1744061),"[Original report](https://bugs.launchpad.net/bugs/1744061) created by **Mislav (mislavorsolic)** ``` Duplicity version: 0.7.16 Python version: 2.7.13 OS Distro and version: Debian 9 Type of target filesystem: Linux, Windows, other.: AWS S3 Log output from -v9 option - Include the command line, the first 200 lines of the log, and the last 200 lines of the log. Using archive dir: /root/.cache/duplicity/5a698764f55901efaa11eacfb96afe00 Using backup name: 5a698764f55901efaa11eacfb96afe00 GPG binary is gpg, version 2.1.18 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of 
duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Using temporary directory /tmp/duplicity-63GCQx-tempdir Traceback (innermost last): File ""/usr/local/bin/duplicity"", line 1559, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1545, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1381, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1140, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1015, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/local/lib/python2.7/dist-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/_boto_single.py"", line 166, in __init__ self.resetConnection() File ""/usr/local/lib/python2.7/dist- packages/duplicity/backends/_boto_single.py"", line 191, in resetConnection location=self.my_location) File ""/usr/lib/python2.7/dist-packages/boto/s3/connection.py"", line 620, in create_bucket response.status, response.reason, body) S3CreateError: S3CreateError: 409 Conflict BucketAlreadyOwnedByYouYour previous request to create the named bucket succeeded and you already own it.test-backup- storage6B34D6B947EB3E1C0W9aKSim3k6b19SJCOObYZVDQ51RjzOH5ElIDV64kw3dGm/jvuQ6R8NVbPFyh4agSu6VL3lsmP8= Why is duplicity trying to create the bucket? It already exists. It's located in AWS S3 - EU (Frankfurt). 
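One way a backend can avoid the 409 entirely is to probe for the bucket before creating it. A hedged sketch follows, duck-typed after boto3-style `head_bucket`/`create_bucket` calls; the `BucketError` class, `ensure_bucket` name, and client shape are illustrative, not duplicity's actual code:

```python
class BucketError(Exception):
    """Stand-in for an S3 error response; `code` mirrors the HTTP status."""
    def __init__(self, code):
        super(BucketError, self).__init__(code)
        self.code = code

def ensure_bucket(client, name):
    # Probe for the bucket first and only create it on a definite miss,
    # so an existing bucket never triggers the BucketAlreadyOwnedByYou path.
    try:
        client.head_bucket(Bucket=name)
        return 'exists'
    except BucketError as e:
        if e.code != '404':
            raise
    client.create_bucket(Bucket=name)
    return 'created'
```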
My line is: duplicity --s3-european-buckets --s3-use-new-style /PATH/TO/UPLOAD/ s3+http://MY-BUCKET-NAME -v9 Am I doing something wrong? ```",6 118022014,2018-01-09 10:17:48.588,VerifyHostKeyDNS not used (lp:#1742105),"[Original report](https://bugs.launchpad.net/bugs/1742105) created by **ybovard (ybovard)** ``` I try to use the SSHFP resource records. Everything works well with ssh in commandline but duplicity does not seem to take VerifyHostKeyDNS into account: # ls -l /root/.ssh/known* -rw-r--r--. 1 root root 792 Jun 22 2017 /root/.ssh/known_hosts-old # cat /etc/ssh/ssh_config ... VerifyHostKeyDNS yes ... # ssh user@myserver.mydomain.mytld Last login: Tue Jan 9 10:43:03 2018 from xyz $ # /bin/duplicity incremental --encrypt-key AAAA --exclude /tmp / scp://user@myserver.mydomain.mytld//backup/comal.novalocal Using archive dir: /root/.cache/duplicity/0ae77733cf532410912be8ca70d1f956 Using backup name: 0ae77733cf532410912be8ca70d1f956 GPG binary is gpg, version 2.0.22 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of 
duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded ssh: starting thread (client mode): 0x1d83e90L ssh: Local version/idstring: SSH-2.0-paramiko_1.16.1 ssh: Remote version/idstring: SSH-2.0-OpenSSH_7.4 ssh: Connected (version 2.0, client OpenSSH_7.4) ssh: kex algos:[u'curve25519-sha256', u'curve25519-sha256@libssh.org', u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group16-sha512', u'diffie-hellman-group18-sha512', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha256', u'diffie-hellman-group14-sha1', u'diffie- hellman-group1-sha1'] server key:[u'ssh-rsa', u'rsa-sha2-512', u'rsa- sha2-256', u'ecdsa-sha2-nistp256', u'ssh-ed25519'] client encrypt:[u'chacha20-poly1305@openssh.com', u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'aes192-cbc', u'aes256-cbc', u'blowfish-cbc', u'cast128-cbc', u'3des-cbc'] server encrypt:[u'chacha20-poly1305@openssh.com', u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'aes128-cbc', u'aes192-cbc', u'aes256-cbc', u'blowfish-cbc', u'cast128-cbc', u'3des-cbc'] client mac:[u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac- sha2-512-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac- sha2-512', u'hmac-sha1'] server mac:[u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac- sha2-512-etm@openssh.com', 
u'hmac-sha1-etm@openssh.com', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac- sha2-512', u'hmac-sha1'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False ssh: Kex agreed: diffie-hellman-group1-sha1 ssh: Cipher agreed: aes128-ctr ssh: MAC agreed: hmac-sha2-256 ssh: Compression agreed: none ssh: kex engine KexGroup1 specified hash_algo ssh: Switch to new keys ... The authenticity of host 'myserver.mydomain.mytld' can't be established. SSH-RSA key fingerprint is zzzz. Are you sure you want to continue connecting (yes/no)? I am using duplicity 0.7.15 on CentOS Linux 7, with Python 2.7.5 ```",6 118022422,2018-01-05 11:12:05.987,Giving up after 5 attempts. Error: g-io-error-quark: Error splicing file: Input/output error (0) (lp:#1741458),"[Original report](https://bugs.launchpad.net/bugs/1741458) created by **Marek Wasenczuk (markwasenczuk)** ``` Giving up after 5 attempts. 
Error: g-io-error-quark: Error splicing file: Input/output error (0) **Distro Ubuntu 18.04.2 LTS **Versions deja-dup 37.1-2fakesync1ubuntu0.1 duplicity 0.7.17-0ubuntu1.1 **contents of ""/tmp/deja-dup.gsettings"" after running ""gsettings list- recursively org.gnome.DejaDup > /tmp/deja-dup.gsettings"" org.gnome.DejaDup last-restore '' org.gnome.DejaDup periodic true org.gnome.DejaDup periodic-period 7 org.gnome.DejaDup full-backup-period 90 org.gnome.DejaDup backend 'drive' org.gnome.DejaDup last-run '2019-03-14T01:31:20.860377Z' org.gnome.DejaDup nag-check '2019-01-31T01:36:20.855909Z' org.gnome.DejaDup prompt-check 'disabled' org.gnome.DejaDup root-prompt true org.gnome.DejaDup include-list ['$HOME'] org.gnome.DejaDup exclude-list ['$TRASH', '$DOWNLOAD', '/home/(deleted)/Videos', '/home/(deleted)/VirtualBox VMs'] org.gnome.DejaDup last-backup '2019-03-14T01:31:20.860377Z' org.gnome.DejaDup allow-metered false org.gnome.DejaDup delete-after 182 org.gnome.DejaDup.Rackspace username '' org.gnome.DejaDup.Rackspace container '(machine name)' org.gnome.DejaDup.S3 id '' org.gnome.DejaDup.S3 bucket '' org.gnome.DejaDup.S3 folder '(machine name)' org.gnome.DejaDup.OpenStack authurl '' org.gnome.DejaDup.OpenStack tenant '' org.gnome.DejaDup.OpenStack username '' org.gnome.DejaDup.OpenStack container '(machine name)' org.gnome.DejaDup.GCS id '' org.gnome.DejaDup.GCS bucket '' org.gnome.DejaDup.GCS folder '(machine name)' org.gnome.DejaDup.Local folder '(machine name)' org.gnome.DejaDup.Remote uri '' org.gnome.DejaDup.Remote folder '(machine name)' org.gnome.DejaDup.Drive uuid '06FC2D4F6BE45252' org.gnome.DejaDup.Drive icon '. 
GThemedIcon drive-harddisk-usb drive- harddisk drive' org.gnome.DejaDup.Drive folder 'backup' org.gnome.DejaDup.Drive name '250 GB Volume' org.gnome.DejaDup.GOA id '' org.gnome.DejaDup.GOA folder '(machine name)' org.gnome.DejaDup.GOA type '' org.gnome.DejaDup.File short-name '250 GB Volume' org.gnome.DejaDup.File type 'volume' org.gnome.DejaDup.File migrated true org.gnome.DejaDup.File name 'Toshiba USB 2.0 Ext. HDD: 250 GB Volume' org.gnome.DejaDup.File path 'file:///media/sunye/06FC2D4F6BE45252/backup' org.gnome.DejaDup.File uuid '06FC2D4F6BE45252' org.gnome.DejaDup.File icon '. GThemedIcon drive-harddisk-usb drive- harddisk drive' org.gnome.DejaDup.File relpath b'backup' **Running ""DEJA_DUP_DEBUG=1 deja-dup --backup | tail -n 1000 > /tmp/deja- dup.log"" starts the backing up, which terminates with a pop-up window in which it reads: Backup Failed Failed with an unknown error. Traceback (innermost last): File ""/usr/bin/duplicity"", line 1555, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1541, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1393, in main do_backup(action) File ""/usr/bin/duplicity"", line 1522, in do_backup check_last_manifest(col_stats) # not needed for full backup File ""/usr/bin/duplicity"", line 1227, in check_last_manifest last_backup_set.check_manifests() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 201, in check_manifests remote_manifest = self.get_remote_manifest() File ""/usr/lib/python2.7/dist-packages/duplicity/collections.py"", line 235, in get_remote_manifest manifest_buffer = self.backend.get_data(self.remote_manifest_name) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 684, in get_data fin = self.get_fileobj_read(filename, parseresults) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 676, in get_fileobj_read self.get(filename, tdp) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 395, in inner_retry % (n, 
e.__class__.__name__, util.uexc(e))) File ""/usr/lib/python2.7/dist-packages/duplicity/util.py"", line 79, in uexc return ufn(unicode(e).encode('utf-8')) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 58: ordinal not in range(128) **The contents of the file ""/tmp/deja-dup.log"" omitted because space limitations ```",44 118022011,2018-01-02 09:20:18.684,It should batch-unfreeze glacier S3 objects to speed up restorations (lp:#1740833),"[Original report](https://bugs.launchpad.net/bugs/1740833) created by **Yajo (yajo)** ``` Scenario: Duplicity is configured with S3 backend, S3 is configured to move objects to Glacier storage after 1 week, full backups are stored every Sunday, partial backups happen once each night, today is Friday, the user wants to restore a backup from last week's Wednesday. User executes `duplicity [options] restore s3://bucket/folder /local/folder --time 9D`. What happens: The backup takes about 2-4 days to restore. What should happen: The backup should take about 5-12h to restore. Why it happens: Duplicity unfreezes (move from Glacier to Standard S3 storage mode) archives automatically (cool) one by one (not cool). Unfreezing one archive can take usually 5-12h. Assuming you need to unfreeze files from 4 days to get the full chain, that gives you a 20-48h minimum wait, unless files are split into i.e. 200MB chunks, which can produce more objects to unfreeze, summing extra 5-12h each. Why it should not happen: If you go into S3 Management Console, you can manually select as many Glacier files as you want and unfreeze them all at once, having to wait those 5-12h, yes, but just once for all of them. Duplicity should do that automatically through Boto too. Thanks! 
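The batching the report asks for is essentially a scheduling change: initiate every restore first, then make one combined wait, instead of an initiate-then-wait cycle per object. A minimal sketch of that pattern follows; the callbacks `initiate_restore` and `is_restored` are hypothetical stand-ins for the Boto restore/status calls, so this shows only the scheduling, not duplicity's implementation.

```python
def batch_unfreeze(keys, initiate_restore, is_restored, sleep=lambda s: None):
    # Start every Glacier restore up front so the multi-hour waits overlap.
    # initiate_restore/is_restored are hypothetical callbacks standing in
    # for the real Boto restore/status calls.
    pending = [k for k in keys if not is_restored(k)]
    for k in pending:          # phase 1: cheap, fast restore initiations
        initiate_restore(k)
    while pending:             # phase 2: one combined wait for everything
        pending = [k for k in pending if not is_restored(k)]
        if pending:
            sleep(60)
    return True
```

With N frozen volumes, total wall time then approaches the maximum single restore time rather than the sum of them.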
```",12 118022007,2017-11-11 14:18:09.392,KeyError: 56 (lp:#1731631),"[Original report](https://bugs.launchpad.net/bugs/1731631) created by **Kenneth Loafman (kenneth-loafman)** ``` Last run of Deja-dup has ended up with the following error: Traceback (innermost last): File ""/usr/bin/duplicity"", line 1559, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1545, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1394, in main do_backup(action) File ""/usr/bin/duplicity"", line 1473, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 729, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 558, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 521, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 107, in filter_path_iter for path in path_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 765, in get_fileobj_iter backup_set.volume_name_dict[vol_num], KeyError: 56 This is with 0.7.14+bzr1336-0ubuntu3~ubuntu17.10.1. 
``` Original tags: artful",6 118022004,2017-11-02 20:24:57.043,Support for enterprise OneDrive (lp:#1729689),"[Original report](https://bugs.launchpad.net/bugs/1729689) created by **Cysioland (cysioland)** ``` Currently only ""mainstream"" OneDrive is supported, there's no native way to access sharepoint.com storage. Some info that may be useful on tackling that: https://micreabog.wordpress.com/2017/02/24/using-duplicity-with-microsoft- sharepointonedrive-for-business/ ``` Original tags: microsoft office365 onedrive sharepoint wishlist",12 118022350,2017-10-21 20:55:17.402,"Backup fails with ReadError(""raise ReadError(""unexpected end of data"") (lp:#1725829)","[Original report](https://bugs.launchpad.net/bugs/1725829) created by **Bharat Mediratta (bharat-menalto)** ``` Hey team. I've been using duplicity and duply for about a year now with few problems. I recently upgraded my Debian system (I can probably find out the exact upgrade details if you need to know) and duplicity started crashing. You can see the full trace below. As far as I can tell, it's attempting to convert an incomplete tarfile into a GPG version. I've paused the code at that point and verified that if I do a ""tar tzvf"" on the input tar file, it returns an EOF error. Note that I modified /usr/lib/python2.7/tarfile.py to add slightly more detail to the exception for debugging, but otherwise the code is unchanged. I'm going to continue to debug independently, but if you can point me at some things to try that would help me. Happy to run whatever diagnostics will help narrow this down. 
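The EOF behaviour described above can be reproduced with the standard library alone. This is a self-contained illustration of how Python's tarfile module reports a truncated archive, not duplicity code:

```python
import io
import tarfile

# Build a small tar in memory, then cut it off mid-member to mimic an
# archive whose write or upload was interrupted.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tf:
    data = b'x' * 4096
    info = tarfile.TarInfo(name='file.bin')
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

truncated = io.BytesIO(buf.getvalue()[:1024])  # header + 512 of 4096 data bytes

err = None
try:
    with tarfile.open(fileobj=truncated, mode='r') as tf:
        for member in tf:
            tf.extractfile(member).read()  # reading past the cut fails
except tarfile.ReadError as e:
    err = str(e)
print('tarfile error:', err)
```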
OS INFO ------- Linux fidelity 4.9.0-1-686-pae #1 SMP Debian 4.9.6-3 (2017-01-28) i686 GNU/Linux PACKAGE INFO ------------ ii duplicity 0.7.12-1 i386 encrypted bandwidth-efficient backup ii duply 1.11.3-1 all easy to use frontend to the duplicity backup system OUTPUT FROM A RUN ----------------- # TMPDIR='/tmp' PASSPHRASE='[ELIDED]' FTP_PASSWORD='[ELIDED]' trickle -s -u 1500 -d 5120 duplicity --archive-dir '/var/cache/duply' --name duply_backups --verbosity '9' --volsize 1024 --full-if-older-than 3M --asynchronous-upload --tempdir /var/cache/duply --exclude-filelist '/etc/duply/backups/exclude' '/' 's3://[ELIDED]' gpg: WARNING: unsafe ownership on homedir '/home/bharat/.gnupg' Using archive dir: /var/cache/duply/duply_backups Using backup name: duply_backups GPG binary is gpg, version 2.1.18 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.copycombackend Succeeded Import of duplicity.backends.dpbxbackend Failed: No module named dropbox Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of 
duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Reading globbing filelist /etc/duply/backups/exclude Main action: inc ================================================================================ duplicity 0.7.12 (March 21, 2017) Args: /usr/bin/duplicity --archive-dir /var/cache/duply --name duply_backups --verbosity 9 --volsize 1024 --full-if-older-than 3M --asynchronous-upload --tempdir /var/cache/duply --exclude-filelist /etc/duply/backups/exclude / s3://AKIAIFBSC2GRGO7KR5DQ@s3-eu- central-1.amazonaws.com/menalto.backups/fidelity Linux fidelity 4.9.0-1-686-pae #1 SMP Debian 4.9.6-3 (2017-01-28) i686 /usr/bin/python 2.7.13 (default, Jan 19 2017, 14:48:08) [GCC 6.3.0 20170118] ================================================================================ Using temporary directory /var/cache/duply/duplicity-A4Cxl1-tempdir Registering (mkstemp) temporary file /var/cache/duply/duplicity-A4Cxl1-tempdir/mkstemp-tQT0KF-1 Temp has 172445667328 available, backup will use approx 2469606195. .... 
Comparing usr/lib/i386-linux-gnu/samba/libcliauth.so.0 and usr/lib/i386-linux-gnu/samba/libcliauth.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libcluster.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libcluster.so.0 Comparing usr/lib/i386-linux-gnu/samba/libcluster.so.0 and usr/lib/i386-linux-gnu/samba/libcluster.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libcmdline- credentials.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: 
result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libcmdline-credentials.so.0 Comparing usr/lib/i386-linux-gnu/samba/libcmdline-credentials.so.0 and usr/lib/i386-linux-gnu/samba/libcmdline-credentials.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libcom_err- samba4.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0 Comparing usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0 and usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libcom_err- samba4.so.0.25 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: 
None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0.25 Comparing usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0.25 and usr/lib/i386-linux-gnu/samba/libcom_err-samba4.so.0.25 Selection: examining path /usr/lib/i386-linux-gnu/samba/libdbwrap.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libdbwrap.so.0 Comparing 
usr/lib/i386-linux-gnu/samba/libdbwrap.so.0 and usr/lib/i386-linux-gnu/samba/libdbwrap.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libdcerpc- samba.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libdcerpc-samba.so.0 Comparing usr/lib/i386-linux-gnu/samba/libdcerpc-samba.so.0 and usr/lib/i386-linux-gnu/samba/libdcerpc-samba.so.0 Selection: examining path /usr/lib/i386-linux-gnu/samba/libdcerpc- samba4.so.0 Selection: result: None from function: Command-line exclude glob: /dev Selection: result: None from function: Command-line exclude glob: /tmp Selection: result: None from function: Command-line exclude glob: /proc Selection: result: None from function: Command-line exclude glob: /sys Selection: result: None from function: Command-line exclude glob: /var/lib/apt-xapian-index Selection: result: None from function: Command-line exclude glob: /var/lib/clamav/daily.cld Selection: result: None from function: Command-line exclude glob: /var/lib/mlocate Selection: result: None from function: Command-line exclude glob: /var/cache/apt 
Selection: result: None from function: Command-line exclude glob: /home/duply Selection: result: None from function: Command-line exclude glob: /home/virtualbox/unifi.vdi Selection: result: None from function: Command-line exclude glob: /home/virtualbox/VMs/unifi/Snapshots Selection: + including file Selecting /usr/lib/i386-linux-gnu/samba/libdcerpc-samba4.so.0 Releasing lockfile /var/cache/duply/duply_backups/lockfile.lock Removing still remembered temporary file /var/cache/duply/duplicity-A4Cxl1-tempdir/mktemp-_tMnye-4 Removing still remembered temporary file /var/cache/duply/duplicity-A4Cxl1-tempdir/mkstemp-tQT0KF-1 Removing still remembered temporary file /var/cache/duply/duplicity-A4Cxl1-tempdir/mktemp-t8lgqC-3 Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1553, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1547, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1398, in main do_backup(action) File ""/usr/bin/duplicity"", line 1529, in do_backup incremental_backup(sig_chain) File ""/usr/bin/duplicity"", line 678, in incremental_backup globals.backend) File ""/usr/bin/duplicity"", line 439, in write_multivol globals.volsize) File ""/usr/lib/python2.7/dist-packages/duplicity/gpg.py"", line 360, in GPGWriteFile data = block_iter.next().data File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 523, in next result = self.process(self.input_iter.next()) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 195, in get_delta_iter for new_path, sig_path in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 286, in collate2iters relem2 = riter2.next() File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 354, in combine_path_iters refresh_triple_list(triple_list) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 341, in refresh_triple_list new_triple = get_triple(old_triple[1]) File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", 
line 327, in get_triple path = path_iter_list[iter_index].next() File ""/usr/lib/python2.7/dist-packages/duplicity/diffdir.py"", line 239, in sigtar2path_iter for tarinfo in tf: File ""/usr/lib/python2.7/tarfile.py"", line 2511, in next tarinfo = self.tarfile.next() File ""/usr/lib/python2.7/tarfile.py"", line 2352, in next raise ReadError(""unexpected end of data in %s at offset %d"" % (self.name, self.offset - 1)) ReadError: unexpected end of data in /usr/lib/python2.7/dist- packages/duplicity/arbitrary at offset 2388667391 ```",18 118022003,2017-10-16 09:10:47.205,pyrax deprecated (lp:#1723894),"[Original report](https://bugs.launchpad.net/bugs/1723894) created by **Alexandre ZANNI (shark-oxi)** ``` Rackspace has deprecated pyrax (https://github.com/rackspace/pyrax/) in favor of openstacksdk (https://pypi.python.org/pypi/openstacksdk) and rackspacesdk (https://pypi.python.org/pypi/rackspacesdk). ref: ""python-cloudfiles deprecated"" https://bugs.launchpad.net/duplicity/+bug/1179322 ```",6 118021996,2017-10-09 10:02:37.060,Backup from LVM snapshot fail without error stats / exit code (lp:#1722203),"[Original report](https://bugs.launchpad.net/bugs/1722203) created by **Bouke (bouke-haarsma)** ``` We're using duplicity to create backups of an LVM volume by creating a snapshot. However we found that when a snapshot exceeds the allotted disk space and the volume is forcefully unmounted, that duplicity handles this rather unfortunately. 
Actual results: * Some errors will be logged on stdout * Exit code is 0 * Backup statistics shows 0 errors Expected results: * Errors will be logged on stderr * Exit code is != 0 * Backup statistics shows >0 errors Our setup / problem: * LVM disk with XFS file system * LVM snapshot of that disk, mounted * Duplicity backing up from the mounted snapshot * LVM snapshot might get invalidated before backup finishes if there's a lot of disk activity * LVM will force unmount the snapshot once it is invalidated Output: ```stdout $ duplicity ...; echo $? Local and Remote metadata are synchronized, no sync needed. Last full backup date: Wed Sep 27 16:01:06 2017 Error [Errno 5] Input/output error getting delta for /XXX- backup/data/local/collection-12--2763165517992512287.wt Error [Errno 5] Input/output error getting delta for /XXX- backup/data/local/collection-12--2763165517992512287.wt --------------[ Backup Statistics ]-------------- StartTime 1507540632.42 (Mon Oct 9 11:17:12 2017) EndTime 1507540788.59 (Mon Oct 9 11:19:48 2017) ElapsedTime 156.17 (2 minutes 36.17 seconds) SourceFiles 555 SourceFileSize 27970866720 (26.0 GB) NewFiles 35 NewFileSize 14705246318 (13.7 GB) DeletedFiles 0 ChangedFiles 45 ChangedFileSize 11136098320 (10.4 GB) ChangedDeltaSize 0 (0 bytes) DeltaEntries 80 RawDeltaSize 5603945441 (5.22 GB) TotalDestinationSizeChange 2668791306 (2.49 GB) Errors 0 ------------------------------------------------- 0 ``` Steps to reproduce: ``` $ lvcreate -l100M --name backup --snapshot VG/LV $ mount -o nouuid VG/backup /backup $ duplicity /backup ... 
& $ dd if=/dev/zero of=/backup/zeroes bs=1M count=101 ``` Duplicity: 0.7.13.1 Python: 2.7.5 OS: CentOS Linux release 7.3.1611 (Core) Filesystem: XFS / LVM snapshot ```",6 118021988,2017-10-05 12:28:09.594,I'm not sure if this is an error or my script mistake (lp:#1721535),"[Original report](https://bugs.launchpad.net/bugs/1721535) created by **Bruno (brunooliveira1)** ``` I try to make a backup using the command /usr/local/bin/duplicity full-if-older-than 1M / --encrypt-key=gpgkey --asynchronous-upload file:///backup Os metadados remotos e locais estão sincronizados; nenhuma sincronização é necessária. Data da última cópia de segurança completa: Thu Sep 14 21:05:58 2017 [English: The remote and local metadata are synchronized; no synchronization needed. Last full backup date: Thu Sep 14 21:05:58 2017] Traceback (most recent call last): File ""/usr/local/bin/duplicity"", line 1548, in with_tempdir(main) File ""/usr/local/bin/duplicity"", line 1534, in with_tempdir fn() File ""/usr/local/bin/duplicity"", line 1383, in main do_backup(action) File ""/usr/local/bin/duplicity"", line 1515, in do_backup check_last_manifest(col_stats) # not needed for full backup File ""/usr/local/bin/duplicity"", line 1217, in check_last_manifest last_backup_set.check_manifests() File ""/usr/local/lib/python2.7/dist-packages/duplicity/collections.py"", line 208, in check_manifests remote_manifest = self.get_remote_manifest() File ""/usr/local/lib/python2.7/dist-packages/duplicity/collections.py"", line 244, in get_remote_manifest (self.remote_manifest_name, str(message))) UnicodeEncodeError: 'ascii' codec can't encode character u'\xfa' in position 169: ordinal not in range(128) ``` Original tags: backup incremental",6 118021964,2017-09-06 02:41:06.227,duplicity 0.7.14 occasionally looping when backing up to swift object store (lp:#1715280),"[Original report](https://bugs.launchpad.net/bugs/1715280) created by **Doug (caniwi)** ``` Running duplicity 0.7.14 on Ubuntu 14.04 with Python 2.7.6. From time to time, duplicity goes into a loop. This behaviour also occurred on duplicity 0.7.13.
An strace on the task shows the following: dev-sugar-combined00:~# ps aux |grep dup root 15328 0.0 0.0 11760 2072 pts/1 S+ 14:05 0:00 grep dup root 24704 0.0 0.0 4460 796 ? Ss Sep01 0:00 /bin/sh -c /usr/local/bin/duplicity_backup.sh chensi7_mysql_sgr_chensi7 >> /var/log/backup/chensi7_mysql_sgr_chensi7.log 2>&1 root 24708 0.0 0.0 12476 2892 ? S Sep01 0:00 /bin/bash /usr/local/bin/duplicity_backup.sh chensi7_mysql_sgr_chensi7 root 24836 0.3 0.3 223624 30632 ? S Sep01 28:02 /usr/bin/python /usr/bin/duplicity --verbosity Notice --full-if-older-than 7D --num-retries 3 --asynchronous-upload --no-encryption --volsize 512 /var/opt/backups/sgr_chensi7 swift://dev-sugar-combined00.cloudtech.co.nz- chensi7_mysql_sgr_chensi7 dev-sugar-combined00:~# strace -p 24836 Process 24836 attached select(0, NULL, NULL, NULL, {0, 23735}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 36067}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 2000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 4000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 8000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 16000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 32000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 35575}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 2000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 4000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 8000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 16000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 32000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 35583}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) select(0, NULL, NULL, NULL, {0, 2000}) = 0 (Timeout) ```",6 118021952,2017-09-02 11:51:50.262,Document exit codes in man page (lp:#1714664),"[Original report](https://bugs.launchpad.net/bugs/1714664) created by **Maxxer (lorenzo-milesi)** ``` Please document duplicity exit codes in the man page ```",6 
118021934,2017-09-02 11:17:24.502,Asks 2 or 4 times for the same GnuPG passphrase even if there is no passphrase (lp:#1714662),"[Original report](https://bugs.launchpad.net/bugs/1714662) created by **Valentin Stoykov (vstoykovbg)** ``` When I run duplicity with a sign key like this: $ duplicity --encrypt-sign-key B0ED3B59 data file:///tmp/backup1 it asks for a passphrase two times: GnuPG passphrase: GnuPG passphrase for signing key: There is only one passphrase for this key and this should be obvious for the duplicity. When I add `--use-agent` it asks 4 times for a passphrase using the agent: $ duplicity --encrypt-sign-key B0ED3B59 data file:///tmp/backup --use- agent I tested it with GnuPG key with no passphrase and it still asks for a passphrase (expected: duplicity should 'see' that there is no passphrase for the key and don't ask for it at all). Also, it asks 3 times for the same (empty string) passphrase when doing incremental backup: valentin@computer:~/tmp$ duplicity --encrypt-sign-key FC7C18370CED0054 data file:///tmp/backup Local and Remote metadata are synchronized, no sync needed. Last full backup date: none GnuPG passphrase: GnuPG passphrase for signing key: No signatures found, switching to full backup. --------------[ Backup Statistics ]-------------- StartTime 1504350184.04 (Sat Sep 2 14:03:04 2017) EndTime 1504350184.19 (Sat Sep 2 14:03:04 2017) ElapsedTime 0.16 (0.16 seconds) SourceFiles 4 SourceFileSize 2202707 (2.10 MB) NewFiles 4 NewFileSize 2202707 (2.10 MB) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 4 RawDeltaSize 2202702 (2.10 MB) TotalDestinationSizeChange 2207411 (2.11 MB) Errors 0 ------------------------------------------------- valentin@computer:~/tmp$ echo test > data/test.txt valentin@computer:~/tmp$ duplicity --encrypt-sign-key FC7C18370CED0054 data file:///tmp/backup Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: Sat Sep 2 14:03:01 2017 GnuPG passphrase: GnuPG passphrase for signing key: GnuPG passphrase: --------------[ Backup Statistics ]-------------- StartTime 1504350192.78 (Sat Sep 2 14:03:12 2017) EndTime 1504350192.79 (Sat Sep 2 14:03:12 2017) ElapsedTime 0.01 (0.01 seconds) SourceFiles 5 SourceFileSize 2202713 (2.10 MB) NewFiles 2 NewFileSize 11 (11 bytes) DeletedFiles 0 ChangedFiles 0 ChangedFileSize 0 (0 bytes) ChangedDeltaSize 0 (0 bytes) DeltaEntries 2 RawDeltaSize 5 (5 bytes) TotalDestinationSizeChange 1356 (1.32 KB) Errors 0 Software version: valentin@computer:~$ duplicity --version duplicity 0.7.14 valentin@computer:~$ cat /proc/version Linux version 4.4.0-93-generic (buildd@lgw01-03) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017 valentin@computer:~$ cat /etc/os-release NAME=""Ubuntu"" VERSION=""16.04.3 LTS (Xenial Xerus)"" ID=ubuntu ID_LIKE=debian PRETTY_NAME=""Ubuntu 16.04.3 LTS"" VERSION_ID=""16.04"" HOME_URL=""http://www.ubuntu.com/"" SUPPORT_URL=""http://help.ubuntu.com/"" BUG_REPORT_URL=""http://bugs.launchpad.net/ubuntu/"" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial valentin@computer:~$ python --version Python 2.7.12 valentin@computer:~$ ``` Original tags: gnupg passphrase",6 118021931,2017-08-27 21:19:55.649,Boto backend makes wrong assumption about Glacier class (lp:#1713369),"[Original report](https://bugs.launchpad.net/bugs/1713369) created by **Martin (martin3000)** ``` The code in _boto_single.py in function pre_process_download assumes that an S3 object must not be in storage class ""Glacier"" to be downloaded. This is not correct. S3 objects in Glacier can be downloaded after a restore has been triggered and when the restoration expiry date is available. This means that there is a temporary copy which can be downloaded.
Furthermore, pre_process_download is called first from sync_archive and later from copy_to_local, so triggering the restore twice must be avoided. My version looks like this:

def pre_process_download(self, remote_filename, wait=False):
    # Used primarily to restore files in Glacier
    key_name = self.key_prefix + remote_filename
    if not self._listed_keys.get(key_name, False):
        self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
    key = self._listed_keys[key_name]
    key2 = self.bucket.get_key(key.key)
    if key2.storage_class == ""GLACIER"":
        if not key2.expiry_date:  # no temp copy avail
            if not key2.ongoing_restore:
                log.Info(""File %s is in Glacier storage, restoring"" % remote_filename)
                key.restore(days=2)  # we need the copy for 2 days
            if wait:
                log.Info(""Waiting for file %s to restore in Glacier"" % remote_filename)
                while not self.bucket.get_key(key.key).expiry_date:
                    time.sleep(60)
                    self.resetConnection()
                log.Info(""File %s was successfully restored in Glacier"" % remote_filename)
``` Original tags: backend boto glacier restore",6 118021926,2017-08-27 20:53:16.363,Calling pre_process_download does not work (lp:#1713367),"[Original report](https://bugs.launchpad.net/bugs/1713367) created by **Martin (martin3000)** ``` Duplicity Version 0.7.06 on Linux In function sync_archive of duplicity at line 1185 I find the following code:

if hasattr(globals.backend, 'pre_process_download'):
    globals.backend.pre_process_download(local_missing)
for fn in local_missing:
    copy_to_local(fn)

Before copy_to_local, some preprocessing should be done. In case of Amazon S3 (boto) this should restore objects from Glacier. But ""globals.backend"" points to the backend wrapper, so the condition if hasattr(globals.backend, 'pre_process_download') is never true. It must be globals.backend.backend. When calling pre_process_download the argument is ""local_missing"", which is a list. But _boto_single.py expects a single filename as the argument for pre_process_download.
I would change this to:

if hasattr(globals.backend.backend, 'pre_process_download'):
    for fn in local_missing:
        globals.backend.backend.pre_process_download(fn, wait=False)  # restore from S3
for fn in local_missing:
    copy_to_local(fn)

Same issue in function restore_get_patched_rop_iter.
```
Original tags: boto glacier s3",6
118021924,2017-08-03 17:34:43.957,""""" for restore action is new name for restored file, and not the target folder (lp:#1708501)","[Original report](https://bugs.launchpad.net/bugs/1708501) created by **ardabro (ardabro)**
```
v: 7.11

> mkdir restore
> ls -l
total 0
drwxr-xr-x 1 user users 0 Aug 3 19:23 restore
>
> duplicity restore --file-to-restore=""user/file.txt"" file:///mnt/backup/duplicity ./restore/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Aug 3 00:28:09 2017
GnuPG passphrase for decryption:
>
> ls -l
total 32
-rw-r--r-- 1 user users 20749 Jan 13 2016 restore

It deletes the target folder (""restore"") and renames the extracted file to its name! There is no way to extract a specific file with its original name to an arbitrary location.
```",6
118019411,2017-08-02 21:00:36.566,AssertionError when one volume has been non-destructively decompressed (lp:#1708286),"[Original report](https://bugs.launchpad.net/bugs/1708286) created by **Tim Waugh (twaugh)**
```
When one volume of a backup repository has been decompressed (perhaps by an over-eager file manager application when a curious user has double-clicked on a compressed volume), leaving the original compressed file in place, duplicity raises an AssertionError when asked to operate on that collection. Example:

$ cd /tmp
$ mkdir source backup
$ touch source/foo
$ duplicity full --no-encryption /tmp/source file:///tmp/backup/
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
--------------[ Backup Statistics ]--------------
StartTime 1501707218.27 (Wed Aug 2 21:53:38 2017)
EndTime 1501707218.27 (Wed Aug 2 21:53:38 2017)
ElapsedTime 0.00 (0.00 seconds)
SourceFiles 2
SourceFileSize 60 (60 bytes)
NewFiles 2
NewFileSize 60 (60 bytes)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 127 (127 bytes)
Errors 0
-------------------------------------------------
$ FILE=$(echo backup/duplicity-full.*.vol1.difftar.gz)
$ gunzip -c ""$FILE"" > ""${FILE%.gz}""
$ ls backup | cat
duplicity-full.20170802T205426Z.manifest
duplicity-full.20170802T205426Z.vol1.difftar
duplicity-full.20170802T205426Z.vol1.difftar.gz
duplicity-full-signatures.20170802T205426Z.sigtar.gz
$ duplicity collection-status file:///tmp/backup
Synchronising remote metadata to local cache...
Copying duplicity-full-signatures.20170802T205426Z.sigtar.gz to local cache.
Copying duplicity-full.20170802T205426Z.manifest to local cache.
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1540, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1534, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1385, in main do_backup(action) File ""/usr/bin/duplicity"", line 1410, in do_backup globals.archive_dir).set_values() File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 836, in get_backup_chains add_to_sets(f) File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 824, in add_to_sets if set.add_filename(filename): File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20170802T205426Z.vol1.difftar'}, 'duplicity-full.20170802T205426Z.vol1.difftar.gz') $ rpm -q duplicity duplicity-0.7.13.1-1.fc26.x86_64 $ sed 1q /usr/bin/duplicity #!/usr/bin/env python2 $ type python2 python2 is /usr/bin/python2 $ rpm -qf /usr/bin/python2 python2-2.7.13-11.fc26.x86_64 Full -v9 output: Using archive dir: /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75 Using backup name: ba8d32ccb88d13597b4784252744fc75 GPG binary is gpg, version 1.4.22 Import of duplicity.backends.acdclibackend Succeeded Import of duplicity.backends.azurebackend Succeeded Import of duplicity.backends.b2backend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cfbackend Succeeded Import of duplicity.backends.dpbxbackend Succeeded Import of duplicity.backends.gdocsbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.hubicbackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.lftpbackend Succeeded Import of duplicity.backends.localbackend 
Succeeded Import of duplicity.backends.mediafirebackend Succeeded Import of duplicity.backends.megabackend Succeeded Import of duplicity.backends.multibackend Succeeded Import of duplicity.backends.ncftpbackend Succeeded Import of duplicity.backends.onedrivebackend Succeeded Import of duplicity.backends.par2backend Succeeded Import of duplicity.backends.pydrivebackend Succeeded Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.ssh_paramiko_backend Succeeded Import of duplicity.backends.ssh_pexpect_backend Succeeded Import of duplicity.backends.swiftbackend Succeeded Import of duplicity.backends.sxbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Main action: collection-status Acquiring lockfile /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/lockfile ================================================================================ duplicity 0.7.13.1 (June 18, 2017) Args: /usr/bin/duplicity collection-status -v9 file:///tmp/backup Linux river 4.11.11-300.fc26.x86_64 #1 SMP Mon Jul 17 16:32:11 UTC 2017 x86_64 x86_64 /usr/bin/python2 2.7.13 (default, Jun 26 2017, 10:20:05) [GCC 7.1.1 20170622 (Red Hat 7.1.1-3)] ================================================================================ Local and Remote metadata are synchronized, no sync needed. 
4 files exist on backend
3 files exist in cache
Extracting backup chains from list of files: [u'duplicity-full.20170802T205426Z.vol1.difftar', u'duplicity-full.20170802T205426Z.manifest', u'duplicity-full-signatures.20170802T205426Z.sigtar.gz', u'duplicity-full.20170802T205426Z.vol1.difftar.gz']
File duplicity-full.20170802T205426Z.vol1.difftar is not part of a known set; creating new set
Processing local manifest /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/duplicity-full.20170802T205426Z.manifest (181)
Found manifest volume 1
Found 1 volumes in manifest
File duplicity-full.20170802T205426Z.manifest is part of known set
File duplicity-full-signatures.20170802T205426Z.sigtar.gz is not part of a known set; creating new set
Ignoring file (rejected by backup set) 'duplicity-full-signatures.20170802T205426Z.sigtar.gz'
Releasing lockfile /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/lockfile
Using temporary directory /tmp/duplicity-57Q9j_-tempdir
Releasing lockfile /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/lockfile
Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1540, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1534, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1385, in main do_backup(action) File ""/usr/bin/duplicity"", line 1410, in do_backup globals.archive_dir).set_values() File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 710, in set_values self.get_backup_chains(partials + backend_filename_list) File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 836, in get_backup_chains add_to_sets(f) File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 824, in add_to_sets if set.add_filename(filename): File ""/usr/lib64/python2.7/site-packages/duplicity/collections.py"", line 105, in add_filename (self.volume_name_dict, filename) AssertionError: ({1: 'duplicity-full.20170802T205426Z.vol1.difftar'},
'duplicity-full.20170802T205426Z.vol1.difftar.gz') Releasing lockfile /home/twaugh/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/lockfile ```",6 118021919,2017-07-30 01:10:22.010,Restore fails if backup contains filenames with invalid utf-8 encodings (lp:#1707461),"[Original report](https://bugs.launchpad.net/bugs/1707461) created by **Bernie Innocenti (codewiz)** ``` This is duplicity 0.7.13.1 trying to restore a backup created by the same version: Error '(u'Error creating directory /run/media/bernie/My Book/restore/public_html/projects/amiga/XModule/XModuleSrc/Catalogs/Fran\ufffdais', 7)' processing public_html/projects/amiga/XModule/XModuleSrc/Catalogs/Fran�ais Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1540, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1534, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1385, in main do_backup(action) File ""/usr/bin/duplicity"", line 1462, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 728, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 560, in Write_ROPaths ITR(ropath.index, ropath) File ""/usr/lib64/python2.7/site-packages/duplicity/lazy.py"", line 344, in __call__ last_branch.fast_process, args) File ""/usr/lib64/python2.7/site-packages/duplicity/robust.py"", line 38, in check_common_error return function(*args) File ""/usr/lib64/python2.7/site-packages/duplicity/patchdir.py"", line 614, in fast_process ropath.copy(self.base_path.new_index(index)) File ""/usr/lib64/python2.7/site-packages/duplicity/path.py"", line 445, in copy other.writefileobj(self.open(""rb"")) File ""/usr/lib64/python2.7/site-packages/duplicity/path.py"", line 622, in writefileobj fout = self.open(""wb"") File ""/usr/lib64/python2.7/site-packages/duplicity/path.py"", line 564, in open result = open(self.name, mode) IOError: [Errno 84] Invalid or incomplete multibyte or wide character: '/run/media/bernie/My 
Book/restore/public_html/projects/amiga/XModule/XModuleSrc/Catalogs/fran\xe7ais.ct' The file is named ""français.ct"", but using the latin-1 encoding of ""ç"".
```",28
118021906,2017-07-23 17:28:58.952,Allow per-directory include-filelist (lp:#1705934),"[Original report](https://bugs.launchpad.net/bugs/1705934) created by **Chris Hunt (chrahunt)**
```
Currently, to include/exclude paths we must provide all exclusions to duplicity up front, when executing the command. This does not compose well, as individual directories within a source directory may have different and even conflicting requirements on which contained files should be backed up. It would be very convenient if we could specify a filename, so that if duplicity sees a file with that name in a directory it is backing up, it reads it as if it had been passed to --include-filelist, with these exceptions:

1. Selection rules are interpreted relative to the directory of the file in which they were found
2. Selection rules apply only in the directory in which the matching file was found and child directories (recursively)
3. Selection rules are given priority the lower their originating file is in the directory hierarchy (assuming the root is at the top). Any selection rules provided on the command line or via --include-filelist would be applied last

This would behave similarly to .gitignore in git, and has similar benefits:

1. Keep information about relevant files to include/exclude closer to the impacted files
2. Move directories with special child file selection rules without needing to update a global file with the new path
3. Eliminate the need for external tools to generate selection rules, and reduce extreme uses of the global selection rule list (e.g. 1576389)
4. More optimal behavior w.r.t.
file selection since the selection rules that are most likely to apply to a file would be tested first
```
Original tags: wishlist",6
118021896,2017-07-22 21:31:10.841,duplicity fails to restore backup with key error: 1 on volume_name_dict (lp:#1705849),"[Original report](https://bugs.launchpad.net/bugs/1705849) created by **Hobson Lane (hobs)**
```
I upgraded a laptop from 16.04 to 17.04 and could not resolve the perpetual login prompt at boot, so I deleted my Ubuntu partition, reinstalled Ubuntu 17.04, and did a boot recovery to restore the GRUB prompt for dual boot. However, when I attempted to restore my user data from a deja-dup backup (from 16.04) I received the following Python 2.7.13 traceback: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1532, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1526, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1380, in main do_backup(action) File ""/usr/bin/duplicity"", line 1457, in do_backup restore(col_stats) File ""/usr/bin/duplicity"", line 722, in restore restore_get_patched_rop_iter(col_stats)): File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 560, in Write_ROPaths for ropath in rop_iter: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 523, in integrate_patch_iters for patch_seq in collated: File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 389, in yield_tuples setrorps(overflow, elems) File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 378, in setrorps elems[i] = iter_list[i].next() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 121, in difftar2path_iter tarinfo_list = [tar_iter.next()] File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 339, in next self.set_tarfile() File ""/usr/lib/python2.7/dist-packages/duplicity/patchdir.py"", line 333, in set_tarfile self.current_fp = self.fileobj_iter.next() File ""/usr/bin/duplicity"", line 758, in get_fileobj_iter
backup_set.volume_name_dict[vol_num], KeyError: 1 Environment: Python 2.7.13 (default, Jan 19 2017, 14:48:08) [GCC 6.3.0 20170118] on linux2 duplicity 0.7.06 deja-dup 34.4
```",6
118021884,2017-07-06 15:44:27.866,Only sync relevant remote metadata to prevent big cache (lp:#1702715),"[Original report](https://bugs.launchpad.net/bugs/1702715) created by **Markus (doits)**
```
Duplicity version: 0.7.13.1
Python version: 2.7.12
OS Distro and version: Ubuntu 16.04.2
Type of target filesystem: Multiple

Hello, this is a follow-up of http://lists.nongnu.org/archive/html/duplicity-talk/2017-07/msg00000.html (slightly modified). I've been using duplicity to back up since 2014 with a full backup every 3 months, so I have some secondary backup chains. Additionally, backups go to multiple separate places (each having multiple backup chains). My local cache folder is now over 70 GB big, and I had the idea to delete the cache for the secondary chains to free up space - since they are only needed when restoring files, but not for new backups (to the primary chains). So I deleted the cache files (for example duplicity-full-signatures.20141106T121637Z.sigtar.gpg) and reclaimed a lot of space. But after the next backup run, I saw this in the logs:

Synchronizing remote metadata to local cache...
Copying duplicity-full-signatures.20141106T121637Z.sigtar.gpg to local cache.
Copying duplicity-full-signatures.20150505T234205Z.sigtar.gpg to local cache.
Copying duplicity-full-signatures.20150904T234123Z.sigtar.gpg to local cache.
Copying duplicity-full-signatures.20151205T004128Z.sigtar.gpg to local cache.
...

My current workaround is: before I create a new full backup, I move the remote backup files to a new location. So for duplicity it looks like the backup location is fresh, and it even deletes the local cache automatically, starting with a full backup (which I wanted anyway). If I later want to restore from that backup I will have to move the files back into place on that remote.
It would be great to have a command-line option that prevents syncing any metadata except the current chain's. Maybe it could even determine which metadata is needed for the current operation and only sync that, so for example if I wanted to restore from a secondary backup chain it would only sync that chain's metadata. So if metadata was deleted manually but not needed for the current operation, it would stay deleted.
```
Original tags: cache metadata",16
118021859,2017-07-06 14:10:31.197,Ignore Errors on restore (lp:#1702696),"[Original report](https://bugs.launchpad.net/bugs/1702696) created by **Robert Coup (rcoup)**
```
Situation:

1. full backup with good signatures/manifest but one or more corrupted volumes (eg. `echo ""horse"" > myarchive/duplicity-full.20170706T134837Z.vol9.difftar.gpg`)
2. Error messages along the lines of:
        Invalid data - SHA1 hash mismatch for file:
         duplicity-full.20170706T134837Z.vol9.difftar.gpg
         Calculated hash: 7b64f0f207214f9894a2f4d08a95e57f3c773e72
         Manifest hash: 90f449bb4662db4242caab58f509ca5354afb631
3. `--ignore-errors` doesn't ignore these

The error comes from the de-GPG'ing code in restore_get_enc_fileobj(). We can ignore the errored volume (obviously ignoring the data in it) and continue restoring what we can by proceeding to the next volume. Proof-of-concept patch is attached.

* Doesn't actually print the name of the missing/bad/etc files, just the volumes. The index we're restoring is stored a few levels further up the stack.
* Does work for files which span multiple volumes including corrupted ones, but those specific files will be hosed/bad.
* Doesn't work if directories don't exist (eg. corruption is in volume 1) but later volumes want to write to files in them. Presumably you can work around this with --force and pre-creating them.
* No idea what happens if GPG is disabled or the file is missing, but presumably it would be handled similarly.
* No idea what happens with incrementals either :)

Related:
* https://bugs.launchpad.net/deja-dup/+bug/487720
```
Original tags: restore",10
118021857,2017-06-23 07:19:47.589,Support Sia backup storage (lp:#1700005),"[Original report](https://bugs.launchpad.net/bugs/1700005) created by **Wernight (werner-beroux)**
```
*Sia* (http://sia.tech/get-started/) is relatively easy to set up for users, and for now it seems to really shine for backup storage. The UI flow is shown at https://blog.sia.tech/getting-started-with-private-decentralized-cloud-storage-c9565dc8c854, but there is also *Sia Daemon* (http://sia.tech/apps/) (see the API at https://blog.sia.tech/api-quickstart-guide-f1d160c05235).

Why Sia? One arguable reason is that it's a distributed system with copies on 3 machines. Yes, it's very new and linked to a cryptocurrency, but the real No. 1 reason is the cost: about $0.16 /TB/mo (https://www.reddit.com/r/siacoin/comments/5vl5an/sia_storage_costs_016_per_tb_per_month/). For comparison:
 - $10 /TB/mo Google Cloud Nearline storage
 - $7 /TB/mo Google Cloud Cold storage
 - $12 /TB/mo S3 Infrequent storage
 - $4 /TB/mo S3 Glacier storage
```",16
118021854,2017-06-10 10:03:51.011,Does not preserve timestamp of symbolic links (lp:#1697157),"[Original report](https://bugs.launchpad.net/bugs/1697157) created by **Phil Ruffwind (rufflewind)**
```
It seems that duplicity does not preserve the timestamp (specifically the modification time) of symbolic links. Test case:

1. Create some symbolic links in a directory A (possibly broken links, doesn't matter). Wait a minute or so.
2. duplicity A file://B
3. duplicity file://B C

Expected: same timestamp shown with ls -l as in directory A
Actual: timestamp of symbolic links is the time of restore (step 3) instead

Does not affect normal files, just symbolic links.
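# A minimal reproduction sketch (hypothetical temp paths; assumes Python 3 on
# Linux, where os.utime(..., follow_symlinks=False) can set a symlink's own
# mtime - which is what a restore would need to do to preserve it):
import os
import tempfile
import time

d = tempfile.mkdtemp()
link = os.path.join(d, 'link')
os.symlink('/nonexistent-target', link)  # a broken link is fine for the test
old = time.time() - 3600                 # pretend it was created an hour ago
os.utime(link, (old, old), follow_symlinks=False)
assert abs(os.lstat(link).st_mtime - old) < 2  # the link itself was re-stamped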
duplicity 0.7.12
Python 3.6.1
Linux 4.11.3-1-ARCH
```",6
118021852,2017-06-09 11:55:45.305,ftpes:// not documented (lp:#1696977),"[Original report](https://bugs.launchpad.net/bugs/1696977) created by **Yajo (yajo)**
```
I was going crazy setting up an ftpes:// backup, and the main problem was that I was using ftps:// instead. The duplicity man page says nothing about ftpes support, nor does the website. I had to guess it from https://bugs.launchpad.net/duplicity/+bug/1072130. It would be great if you added it, so the next person does not go crazy :)

FTR, the differences: https://www.cerberusftp.com/support/help/ftp-support/
```",6
118021839,2017-05-31 17:38:47.781,azure backend not working with latest Azure Storage SDK for Python (lp:#1694770),"[Original report](https://bugs.launchpad.net/bugs/1694770) created by **Rafal Sosinski (rafal.sosinski)**
```
There is a problem using the latest version of duplicity (0.7.12) with the latest Azure Storage SDK for Python (2.0.0) to access Azure resources:

root@restore:~# duplicity --file-to-restore data/Capability --no-encryption azure://backup05162017 /mnt -v9
Using archive dir: /root/.cache/duplicity/851a65c225c23ac6257e3fb3ed9061aa
Using backup name: 851a65c225c23ac6257e3fb3ed9061aa
GPG binary is gpg, version 1.4.20
Import of duplicity.backends.acdclibackend Succeeded
Import of duplicity.backends.azurebackend Succeeded
Import of duplicity.backends.b2backend Succeeded
Import of duplicity.backends.botobackend Succeeded
Import of duplicity.backends.cfbackend Succeeded
Import of duplicity.backends.copycombackend Succeeded
Import of duplicity.backends.dpbxbackend Failed: No module named dropbox
Import of duplicity.backends.gdocsbackend Succeeded
Import of duplicity.backends.giobackend Succeeded
Import of duplicity.backends.hsibackend Succeeded
Import of duplicity.backends.hubicbackend Succeeded
Import of duplicity.backends.imapbackend Succeeded
Import of duplicity.backends.lftpbackend Succeeded
Import of duplicity.backends.localbackend Succeeded
Import of duplicity.backends.mediafirebackend Succeeded
Import of duplicity.backends.megabackend Succeeded
Import of duplicity.backends.multibackend Succeeded
Import of duplicity.backends.ncftpbackend Succeeded
Import of duplicity.backends.onedrivebackend Succeeded
Import of duplicity.backends.par2backend Succeeded
Import of duplicity.backends.pydrivebackend Succeeded
Import of duplicity.backends.rsyncbackend Succeeded
Import of duplicity.backends.ssh_paramiko_backend Succeeded
Import of duplicity.backends.ssh_pexpect_backend Succeeded
Import of duplicity.backends.swiftbackend Succeeded
Import of duplicity.backends.sxbackend Succeeded
Import of duplicity.backends.tahoebackend Succeeded
Import of duplicity.backends.webdavbackend Succeeded
Using temporary directory /tmp/duplicity-omCmzF-tempdir
Backend error detail: Traceback (most recent call last): File ""/usr/bin/duplicity"", line 1546, in with_tempdir(main) File ""/usr/bin/duplicity"", line 1540, in with_tempdir fn() File ""/usr/bin/duplicity"", line 1375, in main action = commandline.ProcessCommandLine(sys.argv[1:]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1140, in ProcessCommandLine backup, local_pathname = set_backend(args[0], args[1]) File ""/usr/lib/python2.7/dist-packages/duplicity/commandline.py"", line 1015, in set_backend globals.backend = backend.get_backend(bend) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 223, in get_backend obj = get_backend_object(url_string) File ""/usr/lib/python2.7/dist-packages/duplicity/backend.py"", line 209, in get_backend_object return factory(pu) File ""/usr/lib/python2.7/dist-packages/duplicity/backends/azurebackend.py"", line 53, in __init__ Exception: %s"""""" % str(e)) BackendException: Azure backend requires Microsoft Azure Storage SDK for Python (https://pypi.python.org/pypi/azure-storage/).
Exception: cannot import name BlobService

The required package is already installed:

root@restore:~# pip freeze
adal==0.4.5
asn1crypto==0.22.0
azure-common==1.1.6
azure-nspkg==2.0.0
azure-storage==0.34.2
certifi==2017.4.17
cffi==1.10.0
cryptography==1.8.1
duplicity===-version
enum34==1.1.6
futures==3.1.1
idna==2.5
ipaddress==1.0.18
isodate==0.5.4
keyring==10.3.2
lockfile==0.12.2
msrest==0.4.8
oauthlib==2.0.2
packaging==16.8
pathlib2==2.2.1
pexpect==4.0.1
ptyprocess==0.5
pycparser==2.17
pycrypto==2.6.1
PyJWT==1.5.0
pyparsing==2.2.0
python-dateutil==2.6.0
requests==2.14.2
requests-oauthlib==0.8.0
scandir==1.5
SecretStorage==2.3.1
six==1.10.0

There is no error when using an old version of the azure package (0.11.1). The problem occurs on Linux (Debian, Ubuntu, RHEL and CentOS).
```",34
118021833,2017-05-31 17:10:08.324,verify/download backup fails when reaching b2 daily cap (lp:#1694760),"[Original report](https://bugs.launchpad.net/bugs/1694760) created by **Emanuele Santoro (znpy)**
```
This happened to me when verifying a backup with a daily cap set to the default (1 GB data transfer cap).
```",8
118021828,2017-05-09 08:11:43.570,Amazon Drive backend fails after 1 hour (lp:#1689501),"[Original report](https://bugs.launchpad.net/bugs/1689501) created by **Rojetto (rojetto)**
```
I'm using duplicity 0.7.12 with the Amazon Drive backend adbackend.py (revision 1218) from series 0.8 installed. When backing up a larger amount of data, the first few volumes upload fine, but the upload of later volumes starts to fail after exactly 1 hour with 'HTTP Error: 401 Client Error: Unauthorized'. I don't know exactly how this works, but it seems to me that the backend script requests an OAuth token just at the beginning of the process, which simply expires after a while. Maybe refreshing the token before every volume upload would be an option?
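# A generic sketch of the suggested fix (TokenCache and its names are
# hypothetical, not the real adbackend.py API): remember when the OAuth token
# was fetched and refresh it shortly before the 1-hour lifetime expires,
# checking once per volume upload.
import time

class TokenCache(object):
    LIFETIME = 3600  # seconds; the token appears to expire after 1 hour
    MARGIN = 300     # refresh 5 minutes early to be safe

    def __init__(self, fetch_token):
        self.fetch_token = fetch_token  # callable hitting the OAuth endpoint
        self.token = None
        self.issued_at = 0.0

    def get(self):
        age = time.time() - self.issued_at
        if self.token is None or age > self.LIFETIME - self.MARGIN:
            self.token = self.fetch_token()
            self.issued_at = time.time()
        return self.token

fetches = []
cache = TokenCache(lambda: fetches.append(1) or 'token-%d' % len(fetches))
first, second = cache.get(), cache.get()
assert first == second and len(fetches) == 1  # token reused within lifetime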
```
Original tags: amazon authorization backend drive token unauthorized",6
118021826,2017-05-08 15:12:14.782,original filename is wrongly stored in par2 file (lp:#1689324),"[Original report](https://bugs.launchpad.net/bugs/1689324) created by **Marian Sigler (maix42)**
```
For a while now (I don't know exactly which version, since I skipped some versions due to another bug), when using par2 (specifically, I use par2+pexpect+sftp://), the filename given to par2 includes a path (archive_dir/.../$filename.gpg instead of just $filename.gpg), which causes this error:

File is corrupt. Try to repair duplicity-inc.20170418T124107Z.to.20170425T125712Z.manifest.gpg
Failed to repair duplicity-inc.20170418T124107Z.to.20170425T125712Z.manifest.gpg

Demonstrated by running `strings` on the par2 files (NAME is the `--name` parameter, obfuscated for privacy; `cache/` is my `--archive-dir`):

earlier (good)
duplicity-inc.20170204T171430Z.to.20170221T140208Z.manifest.gpg.par2: duplicity-inc.20170204T171430Z.to.20170221T140208Z.manifest.gpg
duplicity-inc.20170221T140208Z.to.20170403T131919Z.manifest.gpg.par2: duplicity-inc.20170221T140208Z.to.20170403T131919Z.manifest.gpg

now (bad)
duplicity-inc.20170418T124107Z.to.20170425T125712Z.manifest.gpg.par2: cache/NAME/duplicity_temp.1/duplicity-inc.20170418T124107Z.to.20170425T125712Z.manifest.gpg
duplicity-inc.20170425T125712Z.to.20170508T143651Z.manifest.gpg.par2: cache/NAME/duplicity_temp.1/duplicity-inc.20170425T125712Z.to.20170508T143651Z.manifest.gpg

par2cmdline version 0.7.0
duplicity 0.7.12
Linux 4.10.11-1-ARCH x86_64 GNU/Linux
```",6
118019406,2017-05-05 21:30:56.192,Split out verify and compare-data commands (and remove target for verify) (lp:#1688657),"[Original report](https://bugs.launchpad.net/bugs/1688657) created by **Aaron Whitehouse (aaron-whitehouse)**
```
As per: http://lists.gnu.org/archive/html/duplicity-talk/2017-01/msg00065.html

""I would imagine that a more readily understandable syntax would be a duplicity ""verify""
command with just one argument (duplicity verify ... source_url), and another duplicity ""compare"" command that would take two arguments (duplicity compare ... source_url target_directory [--compare-data]) and compare the backup against the target_directory's files' size/date and optionally content (when used with --compare-data).""

I agree. I am about to do some work on the command-line parsing side (Bug #1480565), so can add this to my list. As ede said in that thread, the verify command has evolved unhelpfully over time and the need for a target in verify is a hangover from that.
```",8
118021817,2017-04-30 14:16:03.485,use-agent issue with version 0.7.12 (lp:#1687291),"[Original report](https://bugs.launchpad.net/bugs/1687291) created by **Éric Lemoine (elemoine)**
```
I use the following command for backups:

duplicity --verbosity info --full-if-older-than 1M --sign-key 5BBF59DF126FADEF --encrypt-key 57F334375840CA38 --use-agent --exclude-filelist excludes.txt /home/elemoine file:///media/usb/backup

It fails with duplicity 0.7.12.
This is the GPG error I get: GPG error detail: Traceback (most recent call last): File ""/home/elemoine/.virtualenvs/duplicity/bin/duplicity"", line 1546, in with_tempdir(main) File ""/home/elemoine/.virtualenvs/duplicity/bin/duplicity"", line 1540, in with_tempdir fn() File ""/home/elemoine/.virtualenvs/duplicity/bin/duplicity"", line 1391, in main do_backup(action) File ""/home/elemoine/.virtualenvs/duplicity/bin/duplicity"", line 1521, in do_backup check_last_manifest(col_stats) # not needed for full backup File ""/home/elemoine/.virtualenvs/duplicity/bin/duplicity"", line 1222, in check_last_manifest last_backup_set.check_manifests() File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/collections.py"", line 199, in check_manifests remote_manifest = self.get_remote_manifest() File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/collections.py"", line 234, in get_remote_manifest manifest_buffer = self.backend.get_data(self.remote_manifest_name) File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/backend.py"", line 679, in get_data assert not fin.close() File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/dup_temp.py"", line 226, in close assert not self.fileobj.close() File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/gpg.py"", line 279, in close self.gpg_failed() File ""/home/elemoine/.virtualenvs/duplicity/local/lib/python2.7/site-packages/duplicity/gpg.py"", line 246, in gpg_failed raise GPGError(msg) GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: Sorry, we are in batchmode - can't get input ===== End GnuPG log ===== GPGError: GPG Failed, see log below: ===== Begin GnuPG log ===== gpg: Sorry, we are in batchmode - can't get input ===== End GnuPG log ===== The same command completes without errors with 0.7.11.
It sounds like a gpg-agent/pinentry-related issue. Is this a regression? Do I need to change something in my duplicity and/or GPG configuration? This is what my gpg-agent.conf looks like:

# set the default cache time to 2 hours
default-cache-ttl 7200
# set the max cache time to 8 hours
max-cache-ttl 28800
# use pinentry-gtk-2
pinentry-program /usr/bin/pinentry-gtk-2
# do not allow external cache
no-allow-external-cache

Information:
duplicity version: 0.7.12
python version: 2.7.13
OS: Debian Sid
```",12
118021802,2017-04-27 09:04:59.975,duplicity shouldn't sync so much (lp:#1686643),"[Original report](https://bugs.launchpad.net/bugs/1686643) created by **Martin Nowak (dawgfoto)**
```
Too many duplicity commands first sync manifests and signatures to the local cache, even though they could just work with the remote data. Since signature files can become multiple GB, this is fairly annoying, e.g. when checking collection-status from a dev machine or before removing old backups. Syncing should only be done when it's necessary, or alternatively there should at least be an option to skip syncing.
```
Original tags: enhancement",10
118021799,2017-04-24 12:36:15.123,Provide pip package (lp:#1685784),"[Original report](https://bugs.launchpad.net/bugs/1685784) created by **Yajo (yajo)**
```
Installing duplicity is awesomely hard because one has to guess dependencies to get all plugins working. Please add the ability to do `pip install duplicity` and get a fully working and up-to-date duplicity system.
```",6
118021774,2017-03-21 21:41:40.559,error uploading manifest file on incremental backup (lp:#1674833),"[Original report](https://bugs.launchpad.net/bugs/1674833) created by **ljesmin (lauri-jesmin)**
```
I have a problem with duplicity incremental backups to S3. I back up about 30 G of small files, with some files excluded by pattern, to an S3 eu-central-1 bucket. I use duplicity 0.7.11 from EPEL on CentOS 7.3. Python is 2.7.5.
Full backups work fine, but incremental backups immediately after a full backup generate errors.
Full backup command line: duplicity full --no-encryption --s3-use-multiprocessing --s3-multipart-chunk-size 10 --num-retries 5 --exclude ""**typo3temp/**"" --exclude ""**_temp/**"" --exclude ""**_processed_/**"" /var/www ""s3://s3.eu-central-1.amazonaws.com/backup-bucket-name/www""
Incremental backup command line: duplicity incremental --no-encryption --s3-use-multiprocessing --s3-multipart-chunk-size 10 --num-retries 5 --exclude ""**typo3temp/**"" --exclude ""**_temp/**"" --exclude ""**_processed_/**"" /var/www ""s3://s3.eu-central-1.amazonaws.com/backup-bucket-name/www""
Output from the failing incremental backup:
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Mar 19 02:15:10 2017
Giving up after 1 attempts. OSError: [Errno 2] No such file or directory: '/root/.cache/duplicity/652b29dc986d2c5850695bd80bf5261a/duplicity-inc.20170319T001510Z.to.20170321T001455Z.manifest'
After the backup, /root/.cache/duplicity/652b29dc986d2c5850695bd80bf5261a/ contains these files:
duplicity-full.20170319T001510Z.manifest
duplicity-new-signatures.20170319T001510Z.to.20170321T001455Z.sigtar.gz
duplicity-inc.20170319T001510Z.to.20170321T001455Z.manifest.part
The AWS bucket contains these files from the latest backup:
www/duplicity-inc.20170319T001510Z.to.20170321T001455Z.vol1.difftar.gz
www/duplicity-new-signatures.20170319T001510Z.to.20170321T001455Z.sigtar.gz
The manifest file is missing. It seems that for some reason the manifest is created as a .part file and the upload function does not use it.
```",6
118021761,2017-03-20 16:59:47.399,"DiffDirException(""Bad tarinfo name %s"" % (tiname,)) (lp:#1674423)","[Original report](https://bugs.launchpad.net/bugs/1674423) created by **Fabio (fabiot2)** ``` Hi, unfortunately I'm getting this error on my Mac (installed via brew install duplicity):
Main action: list-current
================================================================================
duplicity 0.7.11 (December 31, 2016) Args: /usr/local/bin/duplicity list-current-files --verbosity 5 sftp://USER@USER.your-storagebox.de:/backups/cloud.terradue.int/data/cloud/one/datastores/1/2727f8942d3b99beabda9ef7148b07ae/2017-03-10_08:47 Darwin Fabios-MacBook-Pro.local 16.4.0 Darwin Kernel Version 16.4.0: Thu Dec 22 22:53:21 PST 2016; root:xnu-3789.41.3~3/RELEASE_X86_64 x86_64 i386 /usr/local/Cellar/duplicity/0.7.11/libexec/bin/python 2.7.10 (default, Jul 30 2016, 19:40:32) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)]
================================================================================
Local and Remote metadata are synchronized, no sync needed.
Processing local manifest /Users/martinelli_f/.cache/duplicity/8323b109d215f9e4e23b0928fd4f2a4e/duplicity-full.20170310T074759Z.manifest (831)
Found 6 volumes in manifest
Last full backup date: Fri Mar 10 08:47:59 2017
Using temporary directory /var/folders/ct/n230dw6n09d74yzwc3zp1j7w0000gn/T/duplicity-iBFZjG-tempdir
Traceback (most recent call last):
File ""/usr/local/bin/duplicity"", line 1546, in with_tempdir(main)
File ""/usr/local/bin/duplicity"", line 1540, in with_tempdir fn()
File ""/usr/local/bin/duplicity"", line 1391, in main do_backup(action)
File ""/usr/local/bin/duplicity"", line 1472, in do_backup list_current(col_stats)
File ""/usr/local/bin/duplicity"", line 707, in list_current for path in path_iter:
File ""/usr/local/Cellar/duplicity/0.7.11/libexec/lib/python2.7/site-packages/duplicity/diffdir.py"", line 350, in combine_path_iters triple_list = [x for x in map(get_triple, range(len(path_iter_list))) if x]
File ""/usr/local/Cellar/duplicity/0.7.11/libexec/lib/python2.7/site-packages/duplicity/diffdir.py"", line 327, in get_triple path = path_iter_list[iter_index].next()
File ""/usr/local/Cellar/duplicity/0.7.11/libexec/lib/python2.7/site-packages/duplicity/diffdir.py"", line 247, in sigtar2path_iter raise DiffDirException(""Bad tarinfo name %s"" % (tiname,))
DiffDirException: Bad tarinfo name signature
This happens despite collection-status completing successfully:
Main action: collection-status
================================================================================
duplicity 0.7.11 (December 31, 2016) Args: /usr/local/bin/duplicity collection-status --verbosity 5 sftp://USER@USER.your-storagebox.de:/backups/cloud.terradue.int/data/cloud/one/datastores/1/2727f8942d3b99beabda9ef7148b07ae/2017-03-10_08:47 Darwin Fabios-MacBook-Pro.local 16.4.0 Darwin Kernel Version 16.4.0: Thu Dec 22 22:53:21 PST 2016; root:xnu-3789.41.3~3/RELEASE_X86_64 x86_64 i386 /usr/local/Cellar/duplicity/0.7.11/libexec/bin/python 2.7.10 (default, Jul 30 2016, 19:40:32) [GCC 4.2.1 Compatible Apple LLVM 8.0.0
(clang-800.0.34)]
================================================================================
Local and Remote metadata are synchronized, no sync needed.
Processing local manifest /Users/martinelli_f/.cache/duplicity/8323b109d215f9e4e23b0928fd4f2a4e/duplicity-full.20170310T074759Z.manifest (831)
Found 6 volumes in manifest
Last full backup date: Fri Mar 10 08:47:59 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /Users/martinelli_f/.cache/duplicity/8323b109d215f9e4e23b0928fd4f2a4e
Found 0 secondary backup chains.
Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Fri Mar 10 08:47:59 2017
Chain end time: Fri Mar 10 08:47:59 2017
Number of contained backup sets: 1
Total number of contained volumes: 6
Type of backup set: Time: Num volumes:
Full Fri Mar 10 08:47:59 2017 6
-------------------------
No orphaned or incomplete backup sets found.
Using temporary directory /var/folders/ct/n230dw6n09d74yzwc3zp1j7w0000gn/T/duplicity-secJnR-tempdir
Kindly, what may I check? Thank you very much, Fabio ```",6
118021759,2017-03-11 19:41:45.349,Unicode characters in filenames are not restored correctly (lp:#1672077),"[Original report](https://bugs.launchpad.net/bugs/1672077) created by **Zachs Kappler (jet-metalsonic500)** ``` While restoring my music folder, I noticed that files and folders with Unicode characters in their names are restored, but the Unicode characters in the filename get replaced with the escape codes used to represent them. An example of what is happening: Original filename: Gangnam Style (강남스타일) Restored filename: Gangnam Style (uac15ub0a8uc2a4ud0c0uc77c) Additionally, it only happens to files containing Japanese or Korean characters, not to symbols such as ♥ or ♘.
I am using deja-dup 34.2-0ubuntu1.1 on elementaryOS 0.4 ```",6
118019103,2017-03-05 16:45:19.769,GPG Key Handling (lp:#1670151),"[Original report](https://bugs.launchpad.net/bugs/1670151) created by **Kenneth Loafman (kenneth-loafman)** ``` In order to provide the best security, duplicity should use the key fingerprint when talking with the gpg process. It should allow the user to specify the short key (8-char), the long key (16-char), or the fingerprint (40-char) on the command line and convert it to the fingerprint when needed. This follows GNU's best practices for handling keys between processes. ```",6
118021756,2017-02-23 21:37:50.524,Backend requirements are not installed (lp:#1667487),"[Original report](https://bugs.launchpad.net/bugs/1667487) created by **Adrien Delhorme (adrien-delhorme)** ``` Some backends have module dependencies (for example pyrax) that are not installed with duplicity. There is a requirements.txt file in the project, but setup.py does not use it to install dependencies. Why not pass the content of the requirements.txt file to setup.py's install_requires argument, or, to avoid installing all backends' dependencies, to the extras_require argument? ```",6