torchsparse.utils

sparse_collate(inputs: List[SparseTensor]) → SparseTensor

Assemble a batch of sparse tensors and add the batch dimension to coords.

Parameters:

inputs (List[SparseTensor]) – A list of sparse tensors.

Returns:

A single sparse tensor with the inputs collated along the batch dimension.

Return type:

SparseTensor
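The coordinate handling can be sketched in plain NumPy. This is illustrative only: the real sparse_collate operates on SparseTensor objects and also merges their features, and whether the batch index is the first or last coordinate column depends on the torchsparse version (it is appended last here).

```python
import numpy as np

def collate_coords(coords_list):
    """Sketch of the batching step: each sample's (N_i, 3) integer
    coordinate array gains a batch-index column, and all samples are
    concatenated along the point dimension."""
    batched = []
    for b, coords in enumerate(coords_list):
        # Batch index column (assumed appended as the last column here).
        idx = np.full((coords.shape[0], 1), b, dtype=coords.dtype)
        batched.append(np.hstack([coords, idx]))
    return np.vstack(batched)

sample_a = np.array([[0, 0, 0], [1, 2, 3]])
sample_b = np.array([[4, 5, 6]])
batched = collate_coords([sample_a, sample_b])  # shape (3, 4)
```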

sparse_collate_fn(inputs: List[Any]) → Any

Find the sparse tensors in the input list and collate them with sparse_collate, leaving the remaining entries to the default collation.

Parameters:

inputs (List[Any]) – A list of inputs.

Returns:

The inputs with their sparse tensors collated.

Return type:

Any
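For dict-style samples, the collation can be pictured as gathering each field key-wise across the batch. This is a hypothetical sketch: the real sparse_collate_fn additionally routes SparseTensor fields through sparse_collate and stacks numeric fields into torch tensors.

```python
def collate_dicts(samples):
    # Gather every field key-wise across the batch; in the real
    # function, SparseTensor fields would go through sparse_collate
    # instead of being collected into a plain list.
    return {key: [sample[key] for sample in samples] for key in samples[0]}

batch = collate_dicts([
    {"input": "pc0", "label": 0},
    {"input": "pc1", "label": 1},
])
# batch == {"input": ["pc0", "pc1"], "label": [0, 1]}
```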

sparse_quantize(coords, voxel_size: float | Tuple[float, ...] = 1, *, return_index: bool = False, return_inverse: bool = False) → List[np.ndarray]

Voxelize x, y, z coordinates and remove duplicates.

Parameters:
  • coords (np.ndarray) – An Nx3 array of x, y, z coordinates.

  • voxel_size (Union[float, Tuple[float, ...]]) – The voxel size.

  • return_index (bool) – Whether to return the indices of the voxels.

  • return_inverse (bool) – Whether to return the inverse indices that map each original point to its voxel.

Returns:

The voxelized coordinates, followed by the voxel indices and/or the inverse indices if requested.

Return type:

List[np.ndarray]
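The voxelization itself can be reproduced with NumPy. This is a sketch under the assumption of floor-division quantization; note that np.unique sorts its output, whereas the real sparse_quantize uses hashing, so the row order may differ.

```python
import numpy as np

def quantize_sketch(coords, voxel_size=1.0,
                    return_index=False, return_inverse=False):
    # Scale to voxel units and floor to integer voxel coordinates.
    voxels = np.floor(coords / np.asarray(voxel_size)).astype(np.int32)
    # Deduplicate rows; np.unique can also report the index of each
    # kept row and the inverse map from input points to voxels.
    uniq, index, inverse = np.unique(
        voxels, axis=0, return_index=True, return_inverse=True)
    outputs = [uniq]
    if return_index:
        outputs.append(index)
    if return_inverse:
        outputs.append(inverse.reshape(-1))
    return outputs if len(outputs) > 1 else outputs[0]

points = np.array([[0.1, 0.2, 0.3],
                   [0.4, 0.1, 0.2],
                   [1.5, 0.0, 0.0]])
# The first two points share a voxel, so only two voxels survive.
voxels, index = quantize_sketch(points, voxel_size=1.0, return_index=True)
```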

tune(model: nn.Module, data_loader: Iterable, n_samples: int = 100, collect_fn: Callable = <lambda>, enable_fp16: bool = False, kmap_mode: str = 'hashmap', save_dir: str = '~/.torchsparse', tune_id: str = 'temp')

Search for the best group strategy using the provided model and data loader.

n_samples samples will be used to tune the best group strategy. The tuned group configs are saved to save_dir/tune_id and loaded into model. If a tuned group config already exists at save_dir/tune_id, it is loaded directly and the tuning is skipped.

Parameters:
  • model – A nn.Module to be profiled for best group configs.

  • data_loader – An iterator yielding data samples. It is recommended to use the same data loader as for training.

  • n_samples – Number of samples for tuning group configs.

  • collect_fn – Processes data before calling model.forward(); in other words, the tuner runs model(*collect_fn(data)) where data is yielded by data_loader. The default handles data of the form {'input': SparseTensor, ...}.

  • enable_fp16 – Whether to use half precision for tuning.

  • kmap_mode – The kernel map mode for tuning. Options are 'hashmap' and 'grid'.

  • save_dir – The directory to save the tuned group configs.

  • tune_id – The id of this tuning run used for saving.

make_ntuple(x: int | List[int] | Tuple[int, ...] | Tensor, ndim: int) → Tuple[int, ...]

Expand x into a tuple of length ndim: a scalar is repeated ndim times, while a list, tuple, or tensor is converted to a tuple of ints.
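A minimal sketch of this expansion in pure Python (the Tensor branch is omitted; a real tensor input would first be converted element-wise to int):

```python
from typing import List, Tuple, Union

def make_ntuple_sketch(x: Union[int, List[int], Tuple[int, ...]],
                       ndim: int) -> Tuple[int, ...]:
    # A scalar is broadcast to all ndim dimensions; sequences are
    # passed through unchanged apart from int conversion.
    if isinstance(x, int):
        return (x,) * ndim
    return tuple(int(v) for v in x)

make_ntuple_sketch(3, ndim=2)          # (3, 3)
make_ntuple_sketch((1, 2, 3), ndim=3)  # (1, 2, 3)
```

Helpers like this let layer constructors accept either a single kernel size (e.g. 3) or a per-dimension one (e.g. (3, 1, 3)) with a uniform internal representation.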