Server: Apache
System: Linux iad1-shared-b8-43 6.6.49-grsec-jammy+ #10 SMP Thu Sep 12 23:23:08 UTC 2024 x86_64
User: dh_edsupp (6597262)
PHP Version: 8.2.26
Disable Function: NONE
Directory: /lib/python3/dist-packages/mercurial/__pycache__/
"""
Algorithm works in the following way. You have two repositories: local and
remote. They both contain a DAG of changelists.

The goal of the discovery protocol is to find one set of nodes, *common*,
the set of nodes shared by local and remote.

One of the issues with the original protocol was latency: it could
potentially require lots of roundtrips to discover that the local repo was
a subset of remote (which is a very common case; you usually have few
changes compared to upstream, while upstream probably had lots of
development).

The new protocol requires only one interface for the remote repo:
`known()`, which, given a set of changelists, tells you whether they are
present in the DAG.

The algorithm then works as follows:

- We will be using three sets, `common`, `missing`, `unknown`. Originally
  all nodes are in `unknown`.
- Take a sample from `unknown` and call `remote.known(sample)`:
  - For each node that remote knows, move it and all its ancestors to
    `common`.
  - For each node that remote doesn't know, move it and all its
    descendants to `missing`.
- Iterate until `unknown` is empty.

There are a couple of optimizations. First, instead of starting with a
random sample of missing, start by sending all heads; in the case where
the local repo is a subset, you compute the answer in one round trip.

Then you can do something similar to the bisecting strategy used when
finding faulty changesets: instead of random samples, try picking nodes
that will maximize the number of nodes classified along with them (since
all ancestors or descendants will be marked as well).
"""

from __future__ import absolute_import

import collections
import random

from .i18n import _
from .node import nullrev
from . import (
    error,
    policy,
    util,
)
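The classification loop described in the docstring can be sketched in plain Python. Everything below — the toy `PARENTS` DAG, the `discover`, `ancestors`, and `descendants` helpers, and the stubbed remote set — is illustrative and not mercurial's API; a real implementation walks revlog ancestry and issues the `known` wire-protocol command.

```python
import random

# Hypothetical toy DAG: each node maps to the list of its parents.
# a -> b -> c -> d is shared history; e and f exist only locally.
PARENTS = {
    "a": [], "b": ["a"], "c": ["b"], "d": ["c"],
    "e": ["d"], "f": ["e"],
}

def ancestors(node, parents):
    """Return node plus all of its ancestors."""
    out, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in out:
            out.add(n)
            stack.extend(parents[n])
    return out

def descendants(node, parents):
    """Return node plus all of its descendants."""
    children = {n: [] for n in parents}
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)
    out, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in out:
            out.add(n)
            stack.extend(children[n])
    return out

def discover(local_nodes, remote_known, parents, samplesize=2):
    """Classify every local node as common or missing using only the
    remote's known() predicate, as the docstring describes."""
    unknown = set(local_nodes)
    common, missing = set(), set()
    while unknown:
        sample = random.sample(sorted(unknown), min(samplesize, len(unknown)))
        for node, known in zip(sample, remote_known(sample)):
            if known:
                common |= ancestors(node, parents) & set(local_nodes)
            else:
                missing |= descendants(node, parents) & set(local_nodes)
        unknown -= common | missing
    return common, missing

# The remote only has a..d; known() answers membership per sampled node.
remote = {"a", "b", "c", "d"}
common, missing = discover(PARENTS, lambda s: [n in remote for n in s], PARENTS)
print(sorted(common), sorted(missing))  # ['a', 'b', 'c', 'd'] ['e', 'f']
```

Because a sampled node is always its own ancestor (or descendant), every round classifies at least the sampled nodes, so `unknown` strictly shrinks and the loop terminates.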
def _updatesample(revs, heads, sample, parentfn, quicksamplesize=0):
    """update an existing sample to match the expected size

    The sample is updated with revs exponentially distant from each head
    of the <revs> set. (H~1, H~2, H~4, H~8, etc).

    If a target size is specified, the sampling will stop once this size
    is reached. Otherwise sampling will happen until roots of the <revs>
    set are reached.

    :revs:  set of revs we want to discover (if None, assume the whole dag)
    :heads: set of DAG head revs
    :sample: a sample to update
    :parentfn: a callable to resolve parents for a revision
    :quicksamplesize: optional target size of the sample"""
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and len(sample) >= quicksamplesize:
                return
        seen.add(curr)
        for p in parentfn(curr):
            if p != nullrev and (not revs or p in revs):
                dist.setdefault(p, d + 1)
                visit.append(p)


def _limitsample(sample, desiredlen, randomize=True):
    """return a random subset of sample of at most desiredlen items.

    If randomize is False, though, a deterministic subset is returned.
    This is meant for integration tests.
    """
    if len(sample) <= desiredlen:
        return sample
    if randomize:
        return set(random.sample(sample, desiredlen))
    sample = list(sample)
    sample.sort()
    return set(sample[:desiredlen])
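A minimal way to watch the exponential sampling, assuming the `_updatesample` helper behaves as recovered above: on a toy linear history of 16 revisions, the sample ends up holding exponentially spaced revs walking back from the head. The copy below is inlined so the snippet is self-contained, with `nullrev` hard-coded to -1 (its value in mercurial); the toy `parentfn` is illustrative.

```python
import collections

def _updatesample(revs, heads, sample, parentfn, quicksamplesize=0):
    # Inline transcription of the helper above; -1 stands in for nullrev.
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and len(sample) >= quicksamplesize:
                return
        seen.add(curr)
        for p in parentfn(curr):
            if p != -1 and (not revs or p in revs):
                dist.setdefault(p, d + 1)
                visit.append(p)

# Toy linear history 0 -> 1 -> ... -> 15; the parent of rev n is n - 1
# (rev 0's "parent" is -1, the null rev, which the walk skips).
parentfn = lambda rev: [rev - 1]
sample = set()
_updatesample(set(range(16)), {15}, sample, parentfn)
print(sorted(sample, reverse=True))  # → [15, 14, 12, 8, 0]
```

The selected revs sit at distances 1, 2, 4, 8, and 16 from the head, matching the `H~1, H~2, H~4, H~8` pattern the docstring describes: the sample stays small while still spanning the whole chain.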