How to sample from a Cartesian product without repetition


I have a list of sets, and I want to draw n different samples, each containing one item from every set. What I don't want is to get them in order, so that, for example, I would first get all the samples that necessarily share the same item from the first set. I also don't want to build the entire Cartesian product, since that may be infeasible in terms of efficiency... Any idea how to do this? Or even something that approximates this behavior?

An example that does not work:

(prod for i, prod in zip(range(n), itertools.product(*list_of_sets)))
7 Answers

9 votes

All the solutions above waste a lot of resources filtering out repeated results by the end of the iteration. That's why I came up with a method that has (almost) linear speed from start to end.

The idea is this: (only in your head) give every result of the standard-order Cartesian product an index. For example, for A x B x C with 2000 x 1 x 2 = 4000 elements:

0: (A[0], B[0], C[0])
1: (A[1], B[0], C[0])
...
1999: (A[1999], B[0], C[0])
2000: (A[0], B[0], C[1])
...
3999: (A[1999], B[0], C[1])
done.

So there are a few questions left to solve:

  • How do we get the list of possible indices? Answer: just multiply 2000*1*2 = 4000, and every number below that is a valid index.
  • How can random indices be generated sequentially without repetition? There are two answers: if you want a sample of known size n, just use random.sample(range(number_of_indices), n). But if you don't know the sample size yet (the more general case), you have to generate the indices on the fly so as not to waste memory. In that case you just generate index = random.randint(0, k - 1) with k = number_of_indices to get the first result and with k = number_of_indices - n to get the n-th result. Just check my code below (note that I use a singly linked list there to store the indices that are already done; it makes insertion an O(1) operation, and we need a lot of insertions here).
  • How do we generate the output from an index? Answer: say our index is i. Then i % 2000 is the index into A for this result, and i // 2000 can be handled recursively as an index into the Cartesian product of the remaining factors. A small decoding sketch follows this list.
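
To make the decoding step and the known-sample-size case concrete, here is a minimal sketch of my own (decode is a hypothetical helper, not part of the answer's code below; the 2000 x 1 x 2 sizes are taken from the example above):

import random

def decode(index, factors):
    # Mixed-radix decoding: index % len(factor) picks the item from the current
    # factor, and index // len(factor) is carried over to the remaining factors.
    items = []
    for factor in factors:
        items.append(factor[index % len(factor)])
        index //= len(factor)
    return tuple(items)

A = list(range(2000)); B = ['b0']; C = ['c0', 'c1']       # 2000 x 1 x 2 = 4000 combinations
number_of_indices = len(A) * len(B) * len(C)
for idx in random.sample(range(number_of_indices), 5):    # 5 distinct indices for a known n
    print(idx, decode(idx, [A, B, C]))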

This is the code I came up with:

import functools
import random

def random_order_cartesian_product(*factors):
    # Total number of combinations in the Cartesian product.
    amount = functools.reduce(lambda prod, factor: prod * len(factor), factors, 1)
    # Singly linked list of the indices drawn so far, kept in ascending order.
    index_linked_list = [None, None]
    for max_index in reversed(range(amount)):
        index = random.randint(0, max_index)
        index_link = index_linked_list
        # Skip past indices that were already used, shifting the draw accordingly.
        while index_link[1] is not None and index_link[1][0] <= index:
            index += 1
            index_link = index_link[1]
        # Insert the new index into the linked list (an O(1) insertion).
        index_link[1] = [index, index_link[1]]
        # Decode the flat index into one item per factor (mixed-radix decoding).
        items = []
        for factor in factors:
            items.append(factor[index % len(factor)])
            index //= len(factor)
        yield items
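
A minimal usage sketch (the input sets and the sample size here are my own illustration, not from the question); note that the factors must be indexable, so the sets are converted to lists first:

import itertools

list_of_sets = [{'a', 'b'}, {'c', 'd'}, {'e', 'f'}]
gen = random_order_cartesian_product(*map(list, list_of_sets))
first_four = list(itertools.islice(gen, 4))   # four distinct combinations in random order
print(first_four)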

1 vote

The following generator function generates non-repeating samples. It works efficiently only as long as the number of generated samples is much smaller than the number of possible samples. It also requires the elements of the sets to be hashable:

import random

def samples(list_of_sets):
    list_of_lists = list(map(list, list_of_sets))  # choice only works on sequences
    seen = set()  # keep track of seen samples
    while True:
        x = tuple(map(random.choice, list_of_lists))  # tuple is hashable
        if x not in seen:
            seen.add(x)
            yield x

>>> lst = [{'b', 'a'}, {'c', 'd'}, {'f', 'e'}, {'g', 'h'}]
>>> gen = samples(lst)
>>> next(gen)
('b', 'c', 'f', 'g')
>>> next(gen)
('a', 'c', 'e', 'g')
>>> next(gen)
('b', 'd', 'f', 'h')
>>> next(gen)
('a', 'c', 'f', 'g')
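
If a fixed number n of samples is needed, one way (my own sketch, not part of the answer) is to slice the generator with itertools.islice:

import itertools

n = 3
three_samples = list(itertools.islice(samples(lst), n))   # n distinct samples; contents vary per run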

1 vote

You can use sample from the random library:

import random
# random.sample needs a sequence, so each set is converted to a list first
[[random.sample(list(x), 1)[0] for x in list_of_sets] for _ in range(n)]

For example:

list_of_sets = [{1,2,3}, {4,5,6}, {1,4,7}]
n = 3

A possible output is:

[[2, 4, 7], [1, 4, 7], [1, 6, 1]]

Edit:

If we want to avoid repetitions, we can use a while loop and collect the results into a set. In addition, you can check whether n is valid, and fall back to the full Cartesian product for invalid values of n:

import itertools
import random
from functools import reduce

chosen = set()
if 0 < n < reduce(lambda a, b: a * b, [len(x) for x in list_of_sets]):
    while len(chosen) < n:
        # random.sample needs a sequence, so convert each set to a list first
        chosen.add(tuple([random.sample(list(x), 1)[0] for x in list_of_sets]))
else:
    chosen = itertools.product(*list_of_sets)

1 vote

Here is a complete version, with an example and a few modifications to make it easier to understand and use:

import functools
import random

def random_order_cartesian_product(factors):
    amount = functools.reduce(lambda prod, factor: prod * len(factor), factors, 1)
    print(amount)
    print(len(factors[0]))
    index_linked_list = [None, None]
    for max_index in reversed(range(amount)):
        index = random.randint(0, max_index)
        index_link = index_linked_list
        while index_link[1] is not None and index_link[1][0] <= index:
            index += 1
            index_link = index_link[1]
        index_link[1] = [index, index_link[1]]
        items = []
        for factor in factors:
            items.append(factor[index % len(factor)])
            index //= len(factor)
        yield items
        

factors=[
    [1,2,3],
    [4,5,6],
    [7,8,9]
]

n = 5

all = random_order_cartesian_product(factors)

count = 0

for comb in all:
  print(comb)
  count += 1
  if count == n:
    break

1 vote

We use a prime number and one of its primitive roots modulo n to create a sequence that is guaranteed to visit every number in an interval exactly once. More specifically, we are looking for a generator of the multiplicative group of integers modulo n.

We have to pick our prime number a little larger than the product prod([len(i) for i in iterables]), so we have to account for the cases where we would get index errors.
import random
from math import gcd
import math
from math import prod
from typing import Iterable


def next_prime(number):
    if number < 0:
        raise ValueError('Negative numbers can not be primes')
    if number <= 1:
        return 2
    if number % 2 == 0:
        number -= 1
    while True:
        number += 2
        max_check = int(math.sqrt(number)) + 2
        for divider in range(3, max_check, 2):
            if number % divider == 0:
                break
        else:
            return number


def is_primitive_root(a, n):
    phi = n - 1
    factors = set()
    for i in range(2, int(phi ** 0.5) + 1):
        if phi % i == 0:
            factors.add(i)
            factors.add(phi // i)
    for factor in factors:
        if pow(a, factor, n) == 1:
            return False
    return True


def find_random_primitive_root(n):
    while True:
        a = random.randint(2, n - 1)
        if gcd(a, n) == 1 and is_primitive_root(a, n):
            return a


class CoordinateMapper:
    """
    A class that maps linear indices to multi-dimensional coordinates within a specified shape.

    Args:
        dims (Iterable[int]): An iterable representing the dimensions of the desired shape.

    Example Usage:
        shape = (2, 3, 5, 4)
        mapper = CoordinateMapper(shape)
        coordinates = mapper.map(10)
        print(coordinates)  # Output: [0, 1, 2, 2]
    """

    def __init__(self, dims: Iterable[int]):
        self.moduli = [prod(dims[i:]) for i in range(len(dims))]
        self.divisors = [prod(dims[i + 1:]) for i in range(len(dims))]

    def map(self, n: int):
        return [(n % self.moduli[i]) // self.divisors[i] for i in range(len(self.moduli))]


def sampler(l):
    close_prime = next_prime(l)
    state = root = find_random_primitive_root(close_prime)
    while state > l:
        state = (state * root) % close_prime  # Inlining the computation leads to a 20% speed up
    yield state - 1
    for i in range(l - 1):
        state = (state * root) % close_prime
        while state > l:
            state = (state * root) % close_prime
        yield state - 1


def _unique_combinations(*iterables):
    cartesian_product_cardinality = prod([len(i) for i in iterables])
    coordinate_mapper = CoordinateMapper([len(i) for i in iterables])
    sequence = sampler(cartesian_product_cardinality)
    for state in sequence:
        yield tuple(iterable[coord] for iterable, coord in zip(iterables, coordinate_mapper.map(state)))


from itertools import product

a = [1, 2, 3, 5]
b = ["a", "b", "c", "d"]

u = _unique_combinations(a, b)

assert sorted(u) == sorted(product(a, b))
I started benchmarking the various approaches. Apart from @matmarbon's, I could not get any solution to run without assertion errors:

import itertools
from itertools import product
import time

approaches = {
    'prime_roots': _unique_combinations,
    'matmarbon': random_order_cartesian_product,
    'itertools.product': itertools.product,
}

a = list(range(10))
b = list(range(10))

for name, approach in approaches.items():
    u = approach(a, b)  # build the product with the approach under test
    assert sorted(u) == sorted(product(a, b))
I benchmarked the following 2 algorithms, with itertools as a baseline:

import pandas as pd
import timeit
import matplotlib.pyplot as plt


def benchmark(approaches, list_lengths, num_repetitions):
    results = []
    for approach, function in approaches.items():
        for length in list_lengths:
            a = list(range(length))
            b = list(range(length))

            def f():
                for item in function(a, b):
                    pass

            execution_time = timeit.timeit(f, number=num_repetitions)
            entry = {
                'Approach': approach,
                'List Length': length,
                'Execution Time': execution_time
            }
            print(entry)
            results.append(entry)
    results_df = pd.DataFrame(results)

    # Plot the benchmark results
    plt.figure(figsize=(10, 6))
    for approach in approaches.keys():
        approach_results = results_df[results_df['Approach'] == approach]
        plt.plot(approach_results['List Length'], approach_results['Execution Time'], marker='o', label=approach)

    plt.xlabel('List Length')
    plt.ylabel('Execution Time (seconds)')
    plt.title('Benchmark Results')
    plt.grid(True)
    plt.legend()
    plt.show()


list_lengths = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
num_repetitions = 3
benchmark(approaches, list_lengths, num_repetitions)

It seems that @matmarbon's algorithm, while correct, performs in O(k^n). The prime-roots approach executes in O(n^k), for k ~ len(iterables) (assuming somewhat evenly sized iterables).

Memory-wise, the prime-roots approach wins simply because it needs only O(1) memory and stores nothing.

Distribution-wise, the prime-roots approach is of course not actually random but a hard-to-predict deterministic sequence. In practice, the sequences should be "random" enough.

Credit goes to this Stack Overflow answer, which inspired the solution.


0 votes

Since I don't want repetitions, sometimes the code just can't be that short. But as @andreyF said, random.sample does the job. Perhaps there is also a better way than repeatedly resampling until there are enough non-repeating samples; this is the best I have so far.

import itertools
import operator
import random
from functools import reduce


def get_cart_product(list_of_sets, n=None):
    max_products_num = reduce(operator.mul, [len(cluster) for cluster in list_of_sets], 1)
    if n is not None and n < max_products_num:
        refs = set()
        while len(refs) < n:
            refs.add(tuple(random.sample(cluster, 1)[0] for cluster in list_of_sets))
        return refs
        return (prod for i, prod in zip(range(n), itertools.product(*list_of_sets)))
    return itertools.product(*list_of_sets)

Note that the code assumes a list of frozen sets; otherwise a conversion should be applied around random.sample(cluster, 1)[0].
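
One possible conversion (my own sketch; on current Python versions random.sample requires a sequence, so the set has to be materialized first):

sampled_item = random.sample(list(cluster), 1)[0]   # or: random.choice(tuple(cluster))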


0 votes

If the number of random samples you need is much smaller than the size of your Cartesian product, a quick solution is to keep a set of the index n-tuples you have already returned:

import random
from typing import List, Optional, Set, Tuple


def random_cartesian_product(*lists, seed: Optional[int] = None, n: int):
    rnd = random.Random(seed)
    cartesian_idxs: Set[Tuple[int, ...]] = set()
    list_lens: List[int] = [len(l) for l in lists]
    max_count: int = 1
    for l_len in list_lens:
        max_count *= l_len
    if max_count < n:
        raise ValueError(f'At most {max_count} cartesian product elements can be created.')
    while len(cartesian_idxs) < n:
        rnd_idx: Tuple[int, ...] = tuple(
            rnd.randint(0, l_len - 1)
            for l_len in list_lens
        )
        if rnd_idx not in cartesian_idxs:
            cartesian_idxs.add(rnd_idx)
            elem = []
            for l_idx, l in zip(rnd_idx, lists):
                elem.append(l[l_idx])
            yield elem
This can be consumed iteratively or all at once:

list(random_cartesian_product(['a', 'b', 'c'], [1, 2], ['x'], n=6, seed=None))
[['b', 1, 'x'], ['a', 1, 'x'], ['a', 2, 'x'], ['c', 1, 'x'], ['c', 2, 'x'], ['b', 2, 'x']]
    