boofun.core.gpu_acceleration
GPU acceleration infrastructure for Boolean function computations.
This module provides GPU acceleration for computationally intensive operations using CuPy, Numba CUDA, and OpenCL backends with intelligent fallback to CPU.
Functions
- Benchmark GPU vs CPU performance.
- get_gpu_info() – Get GPU information and capabilities.
- gpu_accelerate() – Accelerate an operation using GPU.
- is_gpu_available() – Check if GPU acceleration is available.
- Set preferred GPU backend.
- should_use_gpu() – Determine if GPU acceleration should be used for an operation.
Classes
- CuPyAccelerator – GPU acceleration using CuPy.
- GPUAccelerator – Abstract base class for GPU acceleration backends.
- GPUDevice – Represents a GPU device with its capabilities.
- GPUManager – Manages GPU acceleration across different backends.
- NumbaAccelerator – GPU acceleration using Numba CUDA.
- class boofun.core.gpu_acceleration.GPUDevice(device_id: int, name: str, memory_gb: float, compute_capability: str | None = None, backend: str = 'unknown')[source]
Represents a GPU device with its capabilities.
- __init__(device_id: int, name: str, memory_gb: float, compute_capability: str | None = None, backend: str = 'unknown')[source]
Initialize GPU device.
- Parameters:
device_id – Device identifier
name – Device name
memory_gb – Available memory in GB
compute_capability – CUDA compute capability (if applicable)
backend – GPU backend (‘cupy’, ‘numba’, ‘opencl’)
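A minimal construction sketch based on the documented constructor; the attribute access after construction (device.name, device.memory_gb) assumes the parameters are stored as same-named attributes, which is not stated above.

```python
from boofun.core.gpu_acceleration import GPUDevice

# Describe a CUDA device using the documented constructor signature.
device = GPUDevice(
    device_id=0,
    name="NVIDIA GeForce RTX 3080",
    memory_gb=10.0,
    compute_capability="8.6",
    backend="cupy",
)
# Assumes parameters are exposed as attributes of the same name.
print(device.name, device.memory_gb)
```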
- class boofun.core.gpu_acceleration.GPUAccelerator[source]
Abstract base class for GPU acceleration backends.
- abstractmethod accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]
Accelerate batch truth table evaluation.
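As a sketch of the subclassing contract, a hypothetical CPU-only backend could implement the abstract method with plain NumPy; the MSB-first bit ordering used to index the truth table is an assumption, not a documented convention of the library.

```python
import numpy as np
from boofun.core.gpu_acceleration import GPUAccelerator

class NumpyFallbackAccelerator(GPUAccelerator):
    """Hypothetical backend that evaluates the batch on the CPU."""

    def accelerate_truth_table_batch(self, inputs: np.ndarray, truth_table: np.ndarray) -> np.ndarray:
        # Treat each row of `inputs` as a Boolean assignment and turn it into an
        # integer index into the truth table (MSB-first ordering assumed).
        n_vars = inputs.shape[1]
        weights = 1 << np.arange(n_vars - 1, -1, -1)
        indices = inputs.astype(np.int64) @ weights
        return truth_table[indices]
```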
- class boofun.core.gpu_acceleration.CuPyAccelerator[source]
GPU acceleration using CuPy.
- accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]
Accelerate batch truth table evaluation using CuPy.
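The accelerator's actual kernel is not shown here; the following is a minimal sketch of how a CuPy-backed batch lookup can work, using the same assumed MSB-first indexing convention as above.

```python
import numpy as np

try:
    import cupy as cp  # optional dependency
except ImportError:
    cp = None

def truth_table_batch_cupy(inputs: np.ndarray, truth_table: np.ndarray) -> np.ndarray:
    """Illustrative CuPy lookup, not the CuPyAccelerator implementation."""
    if cp is None:
        raise RuntimeError("CuPy is not installed")
    d_inputs = cp.asarray(inputs, dtype=cp.int64)              # host -> device transfer
    d_table = cp.asarray(truth_table)
    n_vars = inputs.shape[1]
    weights = cp.asarray(1 << np.arange(n_vars - 1, -1, -1))   # MSB-first weights (assumed)
    indices = (d_inputs * weights).sum(axis=1)                 # row -> truth-table index
    return cp.asnumpy(d_table[indices])                        # device -> host transfer
```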
- class boofun.core.gpu_acceleration.NumbaAccelerator[source]
GPU acceleration using Numba CUDA.
- accelerate_truth_table_batch(inputs: ndarray, truth_table: ndarray) → ndarray[source]
Accelerate truth table evaluation using Numba CUDA.
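Again as a sketch rather than the accelerator's actual kernel: a Numba CUDA version would typically map one thread per input row, as below (integer input dtype and MSB-first indexing are assumptions).

```python
import numpy as np
from numba import cuda

@cuda.jit
def _lookup_kernel(inputs, truth_table, out):
    # One thread per input row.
    i = cuda.grid(1)
    if i < inputs.shape[0]:
        idx = 0
        for j in range(inputs.shape[1]):
            idx = (idx << 1) | inputs[i, j]
        out[i] = truth_table[idx]

def truth_table_batch_numba(inputs: np.ndarray, truth_table: np.ndarray) -> np.ndarray:
    """Illustrative Numba CUDA lookup, not the NumbaAccelerator implementation."""
    out = np.empty(inputs.shape[0], dtype=truth_table.dtype)
    threads = 128
    blocks = (inputs.shape[0] + threads - 1) // threads
    _lookup_kernel[blocks, threads](inputs, truth_table, out)  # arrays transferred implicitly
    return out
```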
- class boofun.core.gpu_acceleration.GPUManager[source]
Manages GPU acceleration across different backends.
Automatically selects the best available GPU backend and provides intelligent fallback to CPU computation.
- should_use_gpu(operation: str, data_size: int, n_vars: int) → bool[source]
Determine if GPU acceleration should be used.
Uses heuristics based on operation type, data size, and problem complexity.
- Parameters:
operation – Type of operation (‘truth_table’, ‘fourier’, ‘walsh_hadamard’)
data_size – Size of input data
n_vars – Number of variables
- Returns:
True if GPU acceleration is recommended
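For example (a sketch assuming GPUManager can be constructed with no arguments; the thresholds behind the heuristics are internal to the manager):

```python
from boofun.core.gpu_acceleration import GPUManager

manager = GPUManager()

# Small problems usually stay on the CPU; large batches over many variables
# may amortise the cost of host-to-device transfers.
print(manager.should_use_gpu("truth_table", data_size=1_000, n_vars=8))
print(manager.should_use_gpu("walsh_hadamard", data_size=1 << 20, n_vars=20))
```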
- accelerate_operation(operation: str, *args, **kwargs) → ndarray[source]
Accelerate an operation using GPU if beneficial.
- Parameters:
operation – Operation name
*args – Operation arguments
**kwargs – Operation keyword arguments
- Returns:
Operation result
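A usage sketch; the argument order for the 'truth_table' operation (inputs first, then the truth table) is an assumption based on the batch-evaluation signature above.

```python
import numpy as np
from boofun.core.gpu_acceleration import GPUManager

manager = GPUManager()

rng = np.random.default_rng(0)
truth_table = rng.integers(0, 2, size=1 << 10, dtype=np.uint8)  # 10-variable function
inputs = rng.integers(0, 2, size=(10_000, 10), dtype=np.uint8)  # batch of assignments

# The manager runs this on the GPU only if its heuristics say it is worthwhile,
# otherwise it falls back to the CPU implementation.
result = manager.accelerate_operation("truth_table", inputs, truth_table)
print(result.shape)
```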
- benchmark_operation(operation: str, test_data: Tuple, n_trials: int = 5) → Dict[str, float][source]
Benchmark GPU vs CPU performance for an operation.
- Parameters:
operation – Operation to benchmark
test_data – Test data tuple
n_trials – Number of benchmark trials
- Returns:
Performance comparison results
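A sketch of a benchmark call; the composition of test_data and the keys of the returned dictionary are assumptions (the docstring only promises a Dict[str, float] of comparison results).

```python
import numpy as np
from boofun.core.gpu_acceleration import GPUManager

manager = GPUManager()
rng = np.random.default_rng(0)
truth_table = rng.integers(0, 2, size=1 << 10, dtype=np.uint8)
inputs = rng.integers(0, 2, size=(10_000, 10), dtype=np.uint8)

stats = manager.benchmark_operation("truth_table", test_data=(inputs, truth_table), n_trials=5)
print(stats)  # e.g. per-backend timings; exact keys are implementation-defined
```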
- boofun.core.gpu_acceleration.is_gpu_available() → bool[source]
Check if GPU acceleration is available.
- boofun.core.gpu_acceleration.get_gpu_info() → Dict[str, Any][source]
Get GPU information and capabilities.
- boofun.core.gpu_acceleration.should_use_gpu(operation: str, data_size: int, n_vars: int) → bool[source]
Determine if GPU acceleration should be used for an operation.
- boofun.core.gpu_acceleration.gpu_accelerate(operation: str, *args, **kwargs) → ndarray[source]
Accelerate an operation using GPU.
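The module-level helpers combine into a simple check-then-accelerate pattern; the CPU fallback below is a sketch with an assumed MSB-first indexing convention, not the library's own fallback path.

```python
import numpy as np
from boofun.core.gpu_acceleration import (
    get_gpu_info,
    gpu_accelerate,
    is_gpu_available,
    should_use_gpu,
)

print(is_gpu_available())
print(get_gpu_info())  # backend and device details

rng = np.random.default_rng(0)
truth_table = rng.integers(0, 2, size=1 << 12, dtype=np.uint8)
inputs = rng.integers(0, 2, size=(50_000, 12), dtype=np.uint8)

if should_use_gpu("truth_table", data_size=inputs.shape[0], n_vars=12):
    result = gpu_accelerate("truth_table", inputs, truth_table)
else:
    # Plain NumPy lookup as a CPU fallback (indexing convention assumed).
    indices = inputs.astype(np.int64) @ (1 << np.arange(11, -1, -1))
    result = truth_table[indices]
print(result.shape)
```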