Server : Apache
System : Linux iad1-shared-b8-43 6.6.49-grsec-jammy+ #10 SMP Thu Sep 12 23:23:08 UTC 2024 x86_64
User : dh_edsupp ( 6597262)
PHP Version : 8.2.26
Disable Function : NONE
Directory :  /lib/python3/dist-packages/duplicity/__pycache__/
Current File : //lib/python3/dist-packages/duplicity/__pycache__/dup_main.cpython-310.pyc
[Binary contents of dup_main.cpython-310.pyc (compiled CPython 3.10 bytecode of duplicity's dup_main module) omitted — not renderable as text.]