boofun.core.gpu_acceleration

GPU acceleration infrastructure for Boolean function computations.

This module provides GPU acceleration for computationally intensive operations using CuPy, Numba CUDA, and OpenCL backends with intelligent fallback to CPU.
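The fallback strategy can be illustrated in a few lines. This is a sketch of the general pattern only, not the module's actual dispatch code (which lives in GPUManager and also considers Numba CUDA and OpenCL):

```python
# Sketch of the GPU-with-CPU-fallback pattern (illustrative only).
try:
    import cupy as xp   # GPU arrays when CuPy and a CUDA device are present
    GPU_AVAILABLE = True
except ImportError:
    import numpy as xp  # transparent CPU fallback: same array API
    GPU_AVAILABLE = False

def dot(a, b):
    """Runs on GPU when CuPy is importable, otherwise on CPU."""
    return xp.asarray(a) @ xp.asarray(b)
```

Because CuPy mirrors the NumPy API, code written against `xp` runs unchanged on either device.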

Functions

benchmark_gpu_performance(operation, test_data)

Benchmark GPU vs CPU performance.

get_gpu_info()

Get GPU information and capabilities.

gpu_accelerate(operation, *args, **kwargs)

Accelerate an operation using GPU.

is_gpu_available()

Check if GPU acceleration is available.

set_gpu_backend(backend)

Set preferred GPU backend.

should_use_gpu(operation, data_size, n_vars)

Determine if GPU acceleration should be used for an operation.

Classes

CuPyAccelerator()

GPU acceleration using CuPy.

GPUAccelerator()

Abstract base class for GPU acceleration backends.

GPUDevice(device_id, name, memory_gb[, ...])

Represents a GPU device with its capabilities.

GPUManager()

Manages GPU acceleration across different backends.

NumbaAccelerator()

GPU acceleration using Numba CUDA.

class boofun.core.gpu_acceleration.GPUDevice(device_id: int, name: str, memory_gb: float, compute_capability: str | None = None, backend: str = 'unknown')[source]

Represents a GPU device with its capabilities.

__init__(device_id: int, name: str, memory_gb: float, compute_capability: str | None = None, backend: str = 'unknown')[source]

Initialize GPU device.

Parameters:
  • device_id – Device identifier

  • name – Device name

  • memory_gb – Available memory in GB

  • compute_capability – CUDA compute capability (if applicable)

  • backend – GPU backend (‘cupy’, ‘numba’, ‘opencl’)
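In effect, GPUDevice is a plain record of device capabilities; its constructor is equivalent to this dataclass sketch (the real class may carry additional behavior):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GPUDevice:
    """Plain-record sketch of the device descriptor."""
    device_id: int
    name: str
    memory_gb: float
    compute_capability: Optional[str] = None   # e.g. "8.6" for CUDA devices
    backend: str = "unknown"                   # 'cupy', 'numba', or 'opencl'
```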

class boofun.core.gpu_acceleration.GPUAccelerator[source]

Abstract base class for GPU acceleration backends.

abstractmethod is_available() → bool[source]

Check if GPU acceleration is available.

abstractmethod get_devices() → List[GPUDevice][source]

Get available GPU devices.

abstractmethod accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]

Accelerate batch truth table evaluation.

abstractmethod accelerate_fourier_batch(inputs: ndarray, coefficients: ndarray) → ndarray[source]

Accelerate batch Fourier expansion evaluation.

abstractmethod accelerate_walsh_hadamard_transform(function_values: ndarray) → ndarray[source]

Accelerate Walsh-Hadamard transform computation.
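The transform these backends accelerate is the standard fast Walsh-Hadamard transform. A plain-NumPy CPU reference makes the contract concrete (this is a sketch of the mathematical operation, not the library's kernel; the unnormalized convention is an assumption):

```python
import numpy as np

def walsh_hadamard(function_values):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^n array."""
    a = np.asarray(function_values, dtype=np.float64).copy()
    h = 1
    while h < a.size:
        for i in range(0, a.size, 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y   # butterfly step
        h *= 2
    return a
```

For example, the ±1 values of a single-variable parity concentrate all their weight on one coefficient: `walsh_hadamard([1, -1, 1, -1])` yields `[0, 4, 0, 0]`.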

class boofun.core.gpu_acceleration.CuPyAccelerator[source]

GPU acceleration using CuPy.

__init__()[source]

Initialize CuPy accelerator.

is_available() → bool[source]

Check if CuPy is available.

get_devices() → List[GPUDevice][source]

Get available CuPy devices.

accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]

Accelerate batch truth table evaluation using CuPy.

accelerate_fourier_batch(inputs: ndarray, coefficients: ndarray) → ndarray[source]

Accelerate batch Fourier evaluation using CuPy.

accelerate_walsh_hadamard_transform(function_values: ndarray) → ndarray[source]

Accelerate Walsh-Hadamard transform using CuPy.
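Conceptually, batch truth-table evaluation (in any backend) is a bit-packing step followed by a gather. A CPU sketch of the same result — the MSB-first bit order is an assumption, not confirmed by the source:

```python
import numpy as np

def truth_table_batch(inputs, truth_table):
    """Evaluate a Boolean function on many inputs via table lookup.

    inputs: (m, n) array of 0/1 bits; truth_table: length-2^n array.
    Assumes inputs[:, 0] is the most significant bit of the table index.
    """
    inputs = np.asarray(inputs)
    n = inputs.shape[1]
    weights = 1 << np.arange(n - 1, -1, -1)    # [2^(n-1), ..., 2, 1]
    indices = inputs @ weights                 # pack each bit row into an int
    return np.asarray(truth_table)[indices]    # gather outputs
```

Both the pack and the gather are embarrassingly parallel, which is why this operation maps well onto CuPy or CUDA kernels.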

class boofun.core.gpu_acceleration.NumbaAccelerator[source]

GPU acceleration using Numba CUDA.

__init__()[source]

Initialize Numba CUDA accelerator.

is_available() → bool[source]

Check if Numba CUDA is available.

get_devices() → List[GPUDevice][source]

Get available CUDA devices.

accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]

Accelerate truth table evaluation using Numba CUDA.

accelerate_fourier_batch(inputs: ndarray, coefficients: ndarray) → ndarray[source]

Accelerate Fourier evaluation using Numba CUDA.

accelerate_walsh_hadamard_transform(function_values: ndarray) → ndarray[source]

Accelerate Walsh-Hadamard transform using Numba CUDA.
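accelerate_fourier_batch, in every backend, evaluates the Fourier expansion f(x) = Σ_S f̂(S) χ_S(x) over ±1 inputs, where χ_S(x) = Π_{i∈S} x_i. A plain-NumPy sketch of the computation — indexing coefficients by a MSB-first subset bitmask is an assumption:

```python
import numpy as np

def fourier_batch(inputs, coefficients):
    """Evaluate f(x) = sum_S fhat(S) * prod_{i in S} x_i on +/-1 inputs.

    inputs: (m, n) array over {-1, +1}; coefficients: length-2^n array,
    where bit (n-1-i) of index S marks that variable i belongs to S.
    """
    inputs = np.asarray(inputs, dtype=np.float64)
    m, n = inputs.shape
    out = np.zeros(m)
    for S, c in enumerate(np.asarray(coefficients, dtype=np.float64)):
        if c == 0.0:
            continue
        chi = np.ones(m)                       # character chi_S(x)
        for i in range(n):
            if (S >> (n - 1 - i)) & 1:
                chi *= inputs[:, i]
        out += c * chi
    return out
```

The inner products over many inputs are independent, so a GPU backend can assign one thread per input row.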

class boofun.core.gpu_acceleration.GPUManager[source]

Manages GPU acceleration across different backends.

Automatically selects the best available GPU backend and provides intelligent fallback to CPU computation.

__init__()[source]

Initialize GPU manager.

is_gpu_available() → bool[source]

Check if GPU acceleration is available.

get_gpu_info() → Dict[str, Any][source]

Get information about available GPU resources.

should_use_gpu(operation: str, data_size: int, n_vars: int) → bool[source]

Determine if GPU acceleration should be used.

Uses heuristics based on operation type, data size, and problem complexity.

Parameters:
  • operation – Type of operation (‘truth_table’, ‘fourier’, ‘walsh_hadamard’)

  • data_size – Size of input data

  • n_vars – Number of variables

Returns:

True if GPU acceleration is recommended
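The decision follows a threshold pattern like the sketch below. The function name and all thresholds here are purely illustrative, not the library's actual values; the idea is that GPU transfer overhead only pays off for large batches and enough variables:

```python
def should_use_gpu_sketch(operation, data_size, n_vars):
    """Illustrative GPU/CPU dispatch heuristic (hypothetical thresholds)."""
    min_size = {"truth_table": 10_000,
                "fourier": 5_000,
                "walsh_hadamard": 1 << 12}
    # Require both a large enough batch and a non-trivial problem size.
    return data_size >= min_size.get(operation, 10_000) and n_vars >= 8
```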

accelerate_operation(operation: str, *args, **kwargs) → ndarray[source]

Accelerate an operation using GPU if beneficial.

Parameters:
  • operation – Operation name

  • *args – Positional arguments for the operation

  • **kwargs – Keyword arguments for the operation

Returns:

Operation result

benchmark_operation(operation: str, test_data: Tuple, n_trials: int = 5) → Dict[str, float][source]

Benchmark GPU vs CPU performance for an operation.

Parameters:
  • operation – Operation to benchmark

  • test_data – Test data tuple

  • n_trials – Number of benchmark trials

Returns:

Performance comparison results
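The benchmark follows the usual best-of-n timing pattern. A self-contained sketch with hypothetical result keys (the library's actual dictionary layout is not specified here):

```python
import time

def benchmark_pair(gpu_fn, cpu_fn, n_trials=5):
    """Time two zero-argument callables and report best-of-n results."""
    def best(fn):
        times = []
        for _ in range(n_trials):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)                      # best-of-n damps timing noise
    gpu_t, cpu_t = best(gpu_fn), best(cpu_fn)
    return {"gpu_time": gpu_t, "cpu_time": cpu_t,
            "speedup": cpu_t / gpu_t if gpu_t > 0 else float("inf")}
```

Taking the minimum over trials, rather than the mean, discards scheduler and cache-warmup noise and is standard practice for micro-benchmarks.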

clear_cache()[source]

Clear performance cache.

boofun.core.gpu_acceleration.is_gpu_available() → bool[source]

Check if GPU acceleration is available.

boofun.core.gpu_acceleration.get_gpu_info() → Dict[str, Any][source]

Get GPU information and capabilities.

boofun.core.gpu_acceleration.should_use_gpu(operation: str, data_size: int, n_vars: int) → bool[source]

Determine if GPU acceleration should be used for an operation.

boofun.core.gpu_acceleration.gpu_accelerate(operation: str, *args, **kwargs) → ndarray[source]

Accelerate an operation using GPU.

boofun.core.gpu_acceleration.benchmark_gpu_performance(operation: str, test_data: Tuple, n_trials: int = 5) → Dict[str, float][source]

Benchmark GPU vs CPU performance.

boofun.core.gpu_acceleration.set_gpu_backend(backend: str)[source]

Set preferred GPU backend.

Parameters:

backend – Backend name (‘cupy’, ‘numba’, ‘auto’)