Server : Apache
System : Linux iad1-shared-b8-43 6.6.49-grsec-jammy+ #10 SMP Thu Sep 12 23:23:08 UTC 2024 x86_64
User : dh_edsupp ( 6597262)
PHP Version : 8.2.26
Disable Function : NONE
Directory :  /lib/python3/dist-packages/pygments/__pycache__/

Upload File :
current_dir [ Writeable ] document_root [ Writeable ]

 

Current File : /lib/python3/dist-packages/pygments/__pycache__/lexer.cpython-310.pyc
[Binary data omitted: compiled CPython 3.10 bytecode for the pygments.lexer module ("Base lexer classes", copyright 2006-2021 by the Pygments team, BSD license). The raw .pyc contents are not representable as text.]