Current File : /lib/python3/dist-packages/boto/glacier/__pycache__/utils.cpython-310.pyc
import hashlib
import math
import binascii

from boto.compat import six

_MEGABYTE = 1024 * 1024
DEFAULT_PART_SIZE = 4 * _MEGABYTE
MAXIMUM_NUMBER_OF_PARTS = 10000


def minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):
    """Calculate the minimum part size needed for a multipart upload.

    Glacier allows a maximum of 10,000 parts per upload.  It also
    states that the maximum archive size is 10,000 * 4 GB, which means
    the part size can range from 1MB to 4GB (provided it is 1MB
    multiplied by a power of 2).

    This function will compute what the minimum part size must be in
    order to upload a file of size ``size_in_bytes``.

    It will first check if ``default_part_size`` is sufficient for
    a part size given the ``size_in_bytes``.  If this is not the case,
    then the smallest part size that can accommodate a file of size
    ``size_in_bytes`` will be returned.

    If the file size is greater than the maximum allowed archive
    size of 10,000 * 4GB, a ``ValueError`` will be raised.

    """
    part_size = _MEGABYTE
    if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:
        if size_in_bytes > (4096 * _MEGABYTE * 10000):
            raise ValueError("File size too large: %s" % size_in_bytes)
        min_part_size = size_in_bytes / 10000
        power = 3
        while part_size < min_part_size:
            part_size = math.ldexp(_MEGABYTE, power)
            power += 1
        part_size = int(part_size)
    else:
        part_size = default_part_size
    return part_size


def chunk_hashes(bytestring, chunk_size=_MEGABYTE):
    chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))
    hashes = []
    for i in range(chunk_count):
        start = i * chunk_size
        end = (i + 1) * chunk_size
        hashes.append(hashlib.sha256(bytestring[start:end]).digest())
    if not hashes:
        return [hashlib.sha256(b'').digest()]
    return hashes


def tree_hash(fo):
    """
    Given a hash of each 1MB chunk (from chunk_hashes) this will hash
    together adjacent hashes until it ends up with one big one. So a
    tree of hashes.
    """
    hashes = []
    hashes.extend(fo)
    while len(hashes) > 1:
        new_hashes = []
        while True:
            if len(hashes) > 1:
                first = hashes.pop(0)
                second = hashes.pop(0)
                new_hashes.append(hashlib.sha256(first + second).digest())
            elif len(hashes) == 1:
                only = hashes.pop(0)
                new_hashes.append(only)
            else:
                break
        hashes.extend(new_hashes)
    return hashes[0]


def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):
    """Compute the linear and tree hash from a fileobj.

    This function will compute the linear/tree hash of a fileobj
    in a single pass through the fileobj.

    :param fileobj: A file like object.

    :param chunk_size: The size of the chunks to use for the tree
        hash.  This is also the buffer size used to read from
        `fileobj`.

    :rtype: tuple
    :return: A tuple of (linear_hash, tree_hash).  Both hashes
        are returned in hex.

    """
    # On Python 3, reject file-like objects that were not opened in
    # binary mode (text-mode reads would return str, not bytes).
    if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:
        raise ValueError('File-like object must be opened in binary mode!')

    linear_hash = hashlib.sha256()
    chunks = []
    chunk = fileobj.read(chunk_size)
    while chunk:
        # A file-like object may have no mode attribute (not caught by the
        # check above) and still return str; encode such chunks to bytes.
        if not isinstance(chunk, bytes):
            chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')
        linear_hash.update(chunk)
        chunks.append(hashlib.sha256(chunk).digest())
        chunk = fileobj.read(chunk_size)
    if not chunks:
        chunks = [hashlib.sha256(b'').digest()]
    return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))


def bytes_to_hex(str_as_bytes):
    return binascii.hexlify(str_as_bytes)


def tree_hash_from_str(str_as_bytes):
    """

    :type str_as_bytes: str
    :param str_as_bytes: The string for which to compute the tree hash.

    :rtype: str
    :return: The computed tree hash, returned as hex.

    """
    return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))


class ResettingFileSender(object):
    def __init__(self, archive):
        self._archive = archive
        self._starting_offset = archive.tell()

    def __call__(self, connection, method, path, body, headers):
        try:
            connection.request(method, path, body, headers)
            return connection.getresponse()
        finally:
            # Always rewind the archive so the request can be retried.
            self._archive.seek(self._starting_offset)
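The Glacier tree-hash scheme used by this module (SHA-256 over each 1 MB chunk, then pairwise combination of adjacent digests up to a single root) can be sketched standalone with only the standard library. This is an illustrative reimplementation, not the boto code itself; the `chunk_size` parameter is exposed here only so small inputs can exercise the multi-chunk path.

```python
import hashlib
import math


def chunk_hashes(data, chunk_size=1024 * 1024):
    # SHA-256 digest of each chunk; empty input counts as one empty chunk.
    count = int(math.ceil(len(data) / float(chunk_size)))
    hashes = [hashlib.sha256(data[i * chunk_size:(i + 1) * chunk_size]).digest()
              for i in range(count)]
    return hashes or [hashlib.sha256(b'').digest()]


def tree_hash(hashes):
    # Combine adjacent digests pairwise until a single root digest remains;
    # an odd digest at the end of a level is carried up unchanged.
    hashes = list(hashes)
    while len(hashes) > 1:
        pairs = [hashes[i:i + 2] for i in range(0, len(hashes), 2)]
        hashes = [hashlib.sha256(pair[0] + pair[1]).digest() if len(pair) == 2
                  else pair[0]
                  for pair in pairs]
    return hashes[0]


if __name__ == '__main__':
    # For input smaller than one chunk, the tree hash equals the plain
    # SHA-256 of the data (a single leaf is its own root).
    data = b'hello glacier'
    print(tree_hash(chunk_hashes(data)).hex())
```

A two-chunk input makes the combination step visible: the root is `sha256(h1 + h2)` where `h1` and `h2` are the per-chunk digests, matching what `tree_hash_from_str` in the module returns (hex-encoded) for the same bytes.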