Various helper functions

class helpers.LightwoodAutocast(enabled=True)[source]

Equivalent to torch.cuda.amp.autocast, but checks the device's compute capability so that AMP is activated only when the GPU has tensor cores to leverage it.


  • active (bool) – Whether AMP is currently active. This attribute is set at the class level.



```python
>>> import lightwood.helpers.torch as lt
>>> with lt.LightwoodAutocast():
...     # This code will be executed in AMP mode.
...     pass
```

Initializes the context manager for Automatic Mixed Precision (AMP) functionality.


  • enabled (bool, optional) – Whether to enable AMP. Defaults to True.
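The capability check described above can be sketched as follows. The 7.0 threshold (Volta, the first architecture with tensor cores) and the function name are illustrative assumptions, not Lightwood's actual code; in practice the (major, minor) tuple would come from torch.cuda.get_device_capability().

```python
def has_tensor_cores(capability):
    # Illustrative sketch: tensor cores first shipped with the Volta
    # architecture (compute capability 7.0), so AMP only pays off on
    # devices at or above that threshold. The (major, minor) tuple
    # would come from torch.cuda.get_device_capability() in practice.
    major, _minor = capability
    return major >= 7

print(has_tensor_cores((6, 1)))  # Pascal-class GPU: False
print(has_tensor_cores((8, 0)))  # Ampere-class GPU: True
```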

helpers.add_tn_num_conf_bounds(data, tss_args)[source]

Deprecated. Lightwood now opts for the better solution of having scores for each timestep (see the TS classes in analysis/nc).

Adds confidence (and bounds, if applicable) to t+n predictions, for n>1. TODO: active research question: how to guarantee 1-e coverage for t+n, n>1. For now, this (conservatively) increases the interval width by the confidence times the log of the time step (and a scaling factor).
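As a rough illustration of that conservative widening, a sketch is given below; the exact formula and the scaling factor are assumptions, not the deprecated implementation.

```python
import math

def widen_interval(width, confidence, step, scale=1.0):
    # Assumed formula: grow the base interval width by the confidence
    # times the log of the time step, times a scaling factor. At step 1
    # (log 1 == 0) the width is unchanged; it grows slowly for n > 1.
    return width * (1 + confidence * math.log(step) * scale)

print(widen_interval(2.0, 0.9, 1))  # 2.0: no widening at the first step
```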

helpers.concat_vectors_and_pad(vec_list, max_)[source]

Concatenates a list of input vectors and pads them to match a specified maximum length.

This function takes a list of input vectors, concatenates them along a specified dimension (dim=0), and then pads the concatenated vector to achieve a specified maximum length. The padding is done with zeros.

  • vec_list (list of torch.Tensor) – List of input vectors to concatenate and pad.

  • max_ (int) – The maximum length of the concatenated and padded vector.


Returns:

The concatenated and padded vector.

Return type:

torch.Tensor



Raises:

AssertionError – If the length of ‘vec_list’ is not greater than 0, if the concatenated length exceeds ‘max_’, or if ‘max_’ is not greater than 0.


>>> input_tensors = [torch.tensor([1, 2]), torch.tensor([3, 4, 5])]
>>> max_length = 5
>>> concatenated_padded = concat_vectors_and_pad(input_tensors, max_length)
>>> print(concatenated_padded)
tensor([1, 2, 3, 4, 5])
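The example above happens to need no padding; when the concatenated length falls short of the maximum, zeros are appended. A plain-Python sketch of the same concatenate-then-pad logic (the real helper operates on torch.Tensors, so the name and list-based types here are illustrative):

```python
def concat_and_pad(vec_list, max_len):
    # Sketch of the helper's logic on plain lists: concatenate along
    # the first dimension, then right-pad with zeros up to max_len.
    assert len(vec_list) > 0 and max_len > 0
    flat = [x for vec in vec_list for x in vec]
    assert len(flat) <= max_len
    return flat + [0] * (max_len - len(flat))

print(concat_and_pad([[1, 2], [3]], 5))  # [1, 2, 3, 0, 0]
```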

helpers.get_devices()[source]

Get the appropriate Torch device(s) based on CUDA availability and compatibility.

This function determines the appropriate Torch device(s) to be used for computations based on the availability of CUDA and compatible devices. It checks if CUDA is available and if the available CUDA devices are compatible according to the ‘is_cuda_compatible()’ function. If compatible devices are found, the function selects either the first available CUDA device or a randomly selected one based on the ‘RANDOM_GPU’ environment variable. If CUDA is not available or no compatible devices are found, the function returns the CPU device.


Returns:

A tuple containing the selected Torch device and the number of available devices.

Return type:

tuple



>>> device, num_devices = get_devices()
>>> print(device)
>>> print(num_devices)
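The selection rule described above can be sketched as below; pick_device and its signature are illustrative assumptions — the real function queries CUDA itself rather than taking a device count as an argument.

```python
import os
import random

def pick_device(num_compatible_gpus):
    # Sketch of the described rule: fall back to the CPU when no
    # compatible GPU exists; otherwise use cuda:0, or a randomly
    # chosen index when the RANDOM_GPU environment variable is set.
    if num_compatible_gpus == 0:
        return 'cpu', 0
    if os.environ.get('RANDOM_GPU'):
        return f'cuda:{random.randrange(num_compatible_gpus)}', num_compatible_gpus
    return 'cuda:0', num_compatible_gpus

print(pick_device(0))  # ('cpu', 0)
```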

helpers.is_cuda_compatible()[source]

Check whether the system has CUDA-compatible devices with the required architecture and compiled CUDA version.

This function checks the compatibility of CUDA devices available on the system by comparing their architectures and the compiled CUDA version. It iterates through the available devices and verifies if their architectures meet the minimum requirement specified by the function, and also checks if the compiled CUDA version is greater than a specific version.


Returns:

True if there are compatible CUDA devices, otherwise False.

Return type:

bool



>>> is_compatible = is_cuda_compatible()
>>> print(is_compatible)
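A sketch of the compatibility rule, with assumed thresholds (minimum compute capability 3.5 and compiled CUDA version 9.0 — illustrative values, not necessarily the ones the function uses); in practice the capability tuples would come from torch.cuda.get_device_capability() and the compiled version from torch.version.cuda:

```python
def cuda_compatible(capabilities, compiled_cuda):
    # Assumed rule: every device must meet a minimum architecture
    # (3.5 here) and the build's CUDA version must be at least 9.0.
    if not capabilities or compiled_cuda is None:
        return False
    archs_ok = all((major, minor) >= (3, 5) for major, minor in capabilities)
    return archs_ok and compiled_cuda >= (9, 0)

print(cuda_compatible([(7, 0), (8, 0)], (11, 7)))  # True
print(cuda_compatible([(3, 0)], (11, 7)))          # False
```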

Pandas has no way to guarantee “stability” for the type of a column: it chooses to change it arbitrarily based on the values. Pandas also changes the values in a column based on its type. Lightwood relies on having None values for cells that represent “missing” or “corrupt” data.

When we assign None to a cell in a dataframe, it might get turned into nan or another value; this function checks whether a cell is None or any of the other values a pd.DataFrame might convert None to.

It also checks some extra values (like '') that pandas never converts None to (hopefully). Lightwood would still consider those values “None values”, which allows for more generic use later.
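A minimal sketch of such a check; the set of string spellings here is an illustrative assumption, and the actual helper may recognise a different set.

```python
import math

NONE_LIKE_STRINGS = ('', 'None', 'nan')  # assumed spellings

def cell_is_none(value):
    # Treat real None, any float NaN that pandas may have substituted
    # for None, and a few string spellings as "missing" cells.
    if value is None:
        return True
    if isinstance(value, float) and math.isnan(value):
        return True
    return isinstance(value, str) and value in NONE_LIKE_STRINGS

print(cell_is_none(None))          # True
print(cell_is_none(float('nan')))  # True
print(cell_is_none(''))            # True
print(cell_is_none(0))             # False
```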