# Current file: /lib/python3/dist-packages/boto/glacier/__pycache__/writer.cpython-310.pyc
# (compiled from /usr/lib/python3/dist-packages/boto/glacier/writer.py)
import hashlib

from boto.glacier.utils import chunk_hashes, tree_hash, bytes_to_hex
from boto.glacier.utils import compute_hashes_from_fileobj

_ONE_MEGABYTE = 1024 * 1024


class _Partitioner(object):
    """Convert variable-size writes into part-sized writes

    Call write(data) with variable sized data as needed to write all data.
    Call flush() after all data is written.

    This instance will call send_fn(part_data) as needed in part_size
    pieces, except for the final part, which may be shorter than part_size.
    Make sure to call flush() to ensure that a short final part results in
    a final send_fn call.
    """
    def __init__(self, part_size, send_fn):
        self.part_size = part_size
        self.send_fn = send_fn
        self._buffer = []
        self._buffer_size = 0

    def write(self, data):
        if data == b'':
            return
        self._buffer.append(data)
        self._buffer_size += len(data)
        while self._buffer_size > self.part_size:
            self._send_part()

    def _send_part(self):
        # Coalesce the buffered writes, emit one part_size piece, and keep
        # any remainder buffered for the next part.
        data = b''.join(self._buffer)
        if len(data) > self.part_size:
            self._buffer = [data[self.part_size:]]
            self._buffer_size = len(self._buffer[0])
        else:
            self._buffer = []
            self._buffer_size = 0
        part = data[:self.part_size]
        self.send_fn(part)

    def flush(self):
        if self._buffer_size > 0:
            self._send_part()
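# Illustrative sketch (hypothetical, not part of the original module): the
# write()/flush() contract described in the docstring above, exercised with
# a plain list standing in for send_fn. _demo_partitioner and its literals
# are invented for this example only.
def _demo_partitioner():
    parts = []
    partitioner = _Partitioner(part_size=4, send_fn=parts.append)
    partitioner.write(b'abcdef')   # buffer (6 bytes) exceeds part_size: emits b'abcd'
    partitioner.write(b'gh')       # buffer is exactly part_size; nothing sent yet
    partitioner.flush()            # emits the buffered final part b'efgh'
    assert parts == [b'abcd', b'efgh']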
rr%src@s<eZdZdZefdd�Zdd�Zdd�Zdd	�Zd
d�Z	dS)
�	_Uploaderz�Upload to a Glacier upload_id.

    Call upload_part for each part (in any order) and then close to complete
    the upload.

    cCs4||_||_||_||_d|_d|_g|_d|_dS)NrF)�vault�	upload_idr�
chunk_size�
archive_id�_uploaded_size�_tree_hashes�closed�rr r!rr"r
r
rrYs
z_Uploader.__init__cCs:t|j�}||kr|j�dg||d�||j|<dS�N�)rr%�extend)r�index�
raw_tree_hash�list_lengthr
r
r�_insert_tree_hashes
z_Uploader._insert_tree_hashc	Cs�|jrtd��tt||j��}|�||�t|�}t�|��	�}|j
|}||t|�df}|jj
�|jj|j||||�}|��|jt|�7_dS)z�Upload a part to Glacier.

        :param part_index: part number where 0 is the first part
        :param part_data: data to upload corresponding to this part

        �I/O operation on closed filer)N)r&�
ValueErrorrrr"r.r�hashlib�sha256�	hexdigestrrr �layer1�upload_part�namer!�readr$)	r�
part_index�	part_data�part_tree_hash�
hex_tree_hash�linear_hash�start�
content_range�responser
r
rr5ks$
��z_Uploader.upload_partcCs,|jrtd��|�||�|j|7_dS)a�Skip uploading of a part.

        The final close call needs to calculate the tree hash and total size
        of all uploaded data, so this is the mechanism for resume
        functionality to provide it without actually uploading the data again.

        :param part_index: part number where 0 is the first part
        :param part_tree_hash: binary tree_hash of part being skipped
        :param part_length: length of part being skipped

        r/N)r&r0r.r$)rr8r:�part_lengthr
r
r�	skip_part�sz_Uploader.skip_partcCsZ|jrdSd|jvrtd��tt|j��}|jj�|jj|j	||j
�}|d|_d|_dS)NzSome parts were not uploaded.�	ArchiveIdT)r&r%�RuntimeErrorrrr r4�complete_multipart_uploadr6r!r$r#)rr;r?r
r
r�close�s
�

z_Uploader.closeN)
rrrr�
_ONE_MEGABYTErr.r5rArEr
r
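# Illustrative sketch (hypothetical, not part of boto): _insert_tree_hash
# only does list bookkeeping, so the out-of-order behaviour described in
# the class docstring can be shown without a real vault.
def _demo_insert_tree_hash():
    uploader = _Uploader(vault=None, upload_id='demo-upload', part_size=4)
    uploader._insert_tree_hash(1, b'\x02' * 32)   # part 1 arrives first
    uploader._insert_tree_hash(0, b'\x01' * 32)   # part 0 fills the gap
    assert uploader._tree_hashes == [b'\x01' * 32, b'\x02' * 32]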
def generate_parts_from_fobj(fobj, part_size):
    # Read part_size chunks from a text-mode file object and yield them as
    # UTF-8 encoded bytes until the file is exhausted.
    data = fobj.read(part_size)
    while data:
        yield data.encode('utf-8')
        data = fobj.read(part_size)
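# Illustrative sketch (hypothetical, not part of boto): because each part is
# encoded before being yielded, generate_parts_from_fobj expects a text-mode
# file object.
def _demo_generate_parts():
    import io
    parts = list(generate_parts_from_fobj(io.StringIO('abcdef'), 4))
    assert parts == [b'abcd', b'ef']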
def resume_file_upload(vault, upload_id, part_size, fobj, part_hash_map,
                       chunk_size=_ONE_MEGABYTE):
    """Resume upload of a file already part-uploaded to Glacier.

    The resumption of an upload where the part-uploaded section is empty is a
    valid degenerate case that this function can handle. In this case,
    part_hash_map should be an empty dict.

    :param vault: boto.glacier.vault.Vault object.
    :param upload_id: existing Glacier upload id of upload being resumed.
    :param part_size: part size of existing upload.
    :param fobj: file object containing local data to resume. This must read
        from the start of the entire upload, not just from the point being
        resumed. Use fobj.seek(0) to achieve this if necessary.
    :param part_hash_map: {part_index: part_tree_hash, ...} of data already
        uploaded. Each supplied part_tree_hash will be verified and the part
        re-uploaded if there is a mismatch.
    :param chunk_size: chunk size of tree hash calculation. This must be
        1 MiB for Amazon.

    """
    uploader = _Uploader(vault, upload_id, part_size, chunk_size)
    for part_index, part_data in enumerate(
            generate_parts_from_fobj(fobj, part_size)):
        part_tree_hash = tree_hash(chunk_hashes(part_data, chunk_size))
        if (part_index not in part_hash_map or
                part_hash_map[part_index] != part_tree_hash):
            uploader.upload_part(part_index, part_data)
        else:
            uploader.skip_part(part_index, part_tree_hash, len(part_data))
    uploader.close()
    return uploader.archive_id
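# Hypothetical caller sketch (not part of this module): one way to build the
# part_hash_map argument from the Glacier ListParts response, assuming the
# layer1.list_parts call and the 'Parts'/'RangeInBytes'/'SHA256TreeHash'
# response fields; pagination via markers is ignored for brevity.
def _example_build_part_hash_map(vault, upload_id, part_size):
    response = vault.layer1.list_parts(vault.name, upload_id)
    part_hash_map = {}
    for part in response['Parts']:
        byte_start = int(part['RangeInBytes'].split('-')[0])
        part_hash_map[byte_start // part_size] = bytes.fromhex(
            part['SHA256TreeHash'])
    return part_hash_map
# e.g. resume_file_upload(vault, upload_id, part_size, open(filename),
#                         _example_build_part_hash_map(vault, upload_id,
#                                                      part_size))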
class Writer(object):
    """
    Presents a file-like object for writing to an Amazon Glacier
    Archive. The data is written using the multi-part upload API.
    """
    def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE):
        self.uploader = _Uploader(vault, upload_id, part_size, chunk_size)
        self.partitioner = _Partitioner(part_size, self._upload_part)
        self.closed = False
        self.next_part_index = 0

    def write(self, data):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        self.partitioner.write(data)

    def _upload_part(self, part_data):
        self.uploader.upload_part(self.next_part_index, part_data)
        self.next_part_index += 1

    def close(self):
        if self.closed:
            return
        # Send any remaining buffered data as the final part, then complete
        # the multipart upload.
        self.partitioner.flush()
        self.uploader.close()
        self.closed = True

    def get_archive_id(self):
        self.close()
        return self.uploader.archive_id

    @property
    def current_tree_hash(self):
        """
        Returns the current tree hash for the data that's been written
        **so far**.

        Only once the writing is complete is the final tree hash returned.
        """
        return tree_hash(self.uploader._tree_hashes)

    @property
    def current_uploaded_size(self):
        """
        Returns the current uploaded size for the data that's been written
        **so far**.

        Only once the writing is complete is the final uploaded size returned.
        """
        return self.uploader._uploaded_size

    @property
    def upload_id(self):
        return self.uploader.upload_id

    @property
    def vault(self):
        return self.uploader.vault
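# Illustrative end-to-end sketch (hypothetical, not part of boto): drive
# Writer against an in-memory stand-in for vault/layer1 so the part
# splitting and completion flow can be seen without AWS credentials.
# _FakeLayer1, _FakeVault, and the ids used here are invented for this
# example only.
def _demo_writer():
    class _FakeResponse(dict):
        def read(self):
            return b''

    class _FakeLayer1(object):
        def __init__(self):
            self.parts = []

        def upload_part(self, vault_name, upload_id, linear_hash,
                        hex_tree_hash, content_range, part_data):
            self.parts.append((content_range, part_data))
            return _FakeResponse()

        def complete_multipart_upload(self, vault_name, upload_id,
                                      hex_tree_hash, archive_size):
            return _FakeResponse(ArchiveId='fake-archive-id')

    class _FakeVault(object):
        name = 'demo-vault'
        layer1 = _FakeLayer1()

    vault = _FakeVault()
    writer = Writer(vault, 'demo-upload', part_size=4)
    writer.write(b'abcdefgh')   # first full part b'abcd' is sent immediately
    writer.write(b'ij')         # b'efgh' is sent; b'ij' stays buffered
    writer.close()              # flushes b'ij' and completes the upload
    assert writer.get_archive_id() == 'fake-archive-id'
    assert [r for r, _ in vault.layer1.parts] == [(0, 3), (4, 7), (8, 9)]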