torchsparse.nn.modules
- class ReLU(inplace: bool = False)[source]
Bases: ReLU
ReLU activation function.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class LeakyReLU(negative_slope: float = 0.01, inplace: bool = False)[source]
Bases: LeakyReLU
LeakyReLU activation function.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
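A minimal usage sketch for both activations; the SparseTensor construction below assumes the feats-first constructor and a batch-first integer coordinate layout, both of which vary across torchsparse versions:

```python
import torch
from torchsparse import SparseTensor
import torchsparse.nn as spnn

# Toy input: 5 voxels with 16-channel features. The [N, 4] integer
# coordinate layout (batch index first here) is an assumption; it has
# changed across torchsparse versions, so verify against yours.
coords = torch.randint(0, 32, (5, 4), dtype=torch.int32)
coords[:, 0] = 0                      # put every voxel in batch 0
feats = torch.randn(5, 16)
x = SparseTensor(feats, coords)

# Both activations transform x.feats and pass coordinates through.
y = spnn.ReLU(inplace=True)(x)
y = spnn.LeakyReLU(negative_slope=0.1, inplace=True)(y)
```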
- class ToBEVConvolution(in_channels: int, out_channels: int, n_kernels: int, stride: int = 1, dim: int = 1, bias: bool = False)[source]
Bases: Module
Converts a SparseTensor into a sparse BEV feature map.
- extra_repr()[source]
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: SparseTensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
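A hedged sketch of collapsing the sparse volume from the previous example into a BEV map; treating n_kernels as one kernel per discrete z level is our reading, not a documented guarantee:

```python
import torchsparse.nn as spnn

# Assumption: n_kernels ~ number of discrete z levels, each projected
# with its own kernel; stride subsamples the BEV plane.
to_bev = spnn.ToBEVConvolution(in_channels=16, out_channels=32, n_kernels=8)
bev = to_bev(y)   # y: SparseTensor with 16-channel features
```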
- class ToBEVReduction(dim: int = 1)[source]
Bases: Module
- extra_repr()[source]
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
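No description is given above; a hedged sketch, assuming the module merges all features that share a BEV cell over the z axis (dim=1 by default):

```python
import torchsparse.nn as spnn

# Assumption: voxels sharing (batch, x, y) are reduced over z; check
# the source for the exact reduction used.
reduce_bev = spnn.ToBEVReduction(dim=1)
bev_sparse = reduce_bev(y)   # SparseTensor in, SparseTensor out
```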
- class ToDenseBEVConvolution(in_channels: int, out_channels: int, shape: List[int] | Tuple[int, int, int] | Tensor, offset: Tuple[int, int, int] = (0, 0, 0), dim: int = 1, bias: bool = False)[source]
Bases: Module
Converts a SparseTensor into a dense BEV feature map.
Group points with the same z value together and apply the same FC kernel. Aggregate the results by summing up all features within one BEV grid.
Note
This module consumes more memory than ToBEVHeightCompression.
- Parameters:
in_channels – Number of input channels
out_channels – Number of output channels
shape – Shape of BEV map
dim – Dimension index for z (default: 1 for KITTI coords)
bias – Whether to use bias
- extra_repr()[source]
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: SparseTensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
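A hedged sketch; the grid extent and the assumption that the axis at index dim=1 is z (KITTI-style) are illustrative:

```python
import torchsparse.nn as spnn

# shape is the full voxel-grid extent; with dim=1, the 8-long axis is
# treated as z and collapsed into the BEV plane (our assumption).
bev_proj = spnn.ToDenseBEVConvolution(
    in_channels=16,
    out_channels=32,
    shape=[32, 8, 32],
    dim=1,
)
dense_bev = bev_proj(y)   # dense torch.Tensor BEV feature map
```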
- class ToBEVHeightCompression(channels: int, shape: List[int] | Tuple[int, int, int] | Tensor, offset: Tuple[int, int, int] = (0, 0, 0), dim: int = 1)[source]
Bases: Module
Converts a SparseTensor to a flattened volumetric tensor.
- Parameters:
channels – Number of input channels (note: output channels = channels × number of unique z values)
shape – Shape of BEV map
dim – Dimension index for z (default: 1 for KITTI coords)
- extra_repr() → str [source]
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: SparseTensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
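A hedged sketch of the cheaper alternative: instead of learning a per-z projection, the z slices are (as we read the note above) stacked along the channel axis:

```python
import torchsparse.nn as spnn

# Output channel count = channels * number of z values, so with an
# 8-deep z axis the 16-channel input yields 128 channels.
compress = spnn.ToBEVHeightCompression(channels=16, shape=[32, 8, 32], dim=1)
flat = compress(y)   # dense torch.Tensor
```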
- class Conv3d(in_channels: int, out_channels: int, kernel_size: int | List[int] | Tuple[int, ...] = 3, stride: int | List[int] | Tuple[int, ...] = 1, dilation: int = 1, bias: bool = False, transposed: bool = False, config: Dict | None = None)[source]
Bases: Module
3D convolution layer for a sparse tensor.
- Parameters:
in_channels (int) – Number of channels in the input sparse tensor.
out_channels (int) – Number of channels in the output sparse tensor.
kernel_size (int or tuple) – Size of the 3D convolving kernel.
stride (int or tuple) – Stride of the convolution. Default: 1.
dilation (int or tuple) – Spacing between kernel elements. Default: 1.
bias (bool) – If True, adds a learnable bias to the output. Default: False.
transposed (bool) – If True, use transposed convolution. Default: False.
config (dict) – The 3D convolution configuration, including kmap_mode (hashmap or grid), epsilon (the redundant-computation tolerance), and mm_thresh (the mm/bmm threshold used by adaptive matmul grouping). Default: None.
- extra_repr() → str [source]
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
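Sparse layers compose with torch.nn.Sequential like their dense counterparts; the encoder/decoder pairing below is our illustration, not a prescribed pattern:

```python
import torch.nn as nn
import torchsparse.nn as spnn

encoder = nn.Sequential(
    spnn.Conv3d(16, 32, kernel_size=3, stride=1),
    spnn.BatchNorm(32),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),   # downsample by 2
)
# transposed=True gives the matching upsampling layer for decoders.
decoder = spnn.Conv3d(64, 32, kernel_size=3, stride=2, transposed=True)
```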
- class SparseCrop(coords_min: Tuple[int, ...] | None = None, coords_max: Tuple[int, ...] | None = None)[source]
Bases: Module
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
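A hedged sketch; we assume coords_min/coords_max are integer bounds over the spatial axes and that out-of-range voxels are dropped (bound inclusivity is not documented here):

```python
import torchsparse.nn as spnn

crop = spnn.SparseCrop(coords_min=(0, 0, 0), coords_max=(32, 32, 8))
cropped = crop(y)   # SparseTensor restricted to the box
```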
- class BatchNorm(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True, device=None, dtype=None)[source]
Bases: BatchNorm1d
Batch normalization layer.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class GroupNorm(num_groups: int, num_channels: int, eps: float = 1e-05, affine: bool = True, device=None, dtype=None)[source]
Bases: GroupNorm
Group normalization layer.
- forward(input: SparseTensor) → SparseTensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
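Both normalization layers wrap their torch.nn bases (BatchNorm1d and GroupNorm) and normalize the feature tensor while leaving coordinates untouched; a minimal sketch:

```python
import torchsparse.nn as spnn

bn = spnn.BatchNorm(16)
gn = spnn.GroupNorm(num_groups=4, num_channels=16)
z = gn(bn(y))   # SparseTensor in, SparseTensor out; coords unchanged
```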
- class GlobalAvgPool(*args, **kwargs)[source]
Bases: Module
Global average pooling layer.
- forward(input: SparseTensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class GlobalMaxPool(*args, **kwargs)[source]
Bases: Module
Global max pooling layer.
- forward(input: SparseTensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
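Both pooling layers collapse each sample's voxels into one feature vector and return a dense tensor (per the forward signatures above), which makes them natural classification heads; a sketch, assuming a [batch_size, channels] output layout:

```python
import torch.nn as nn
import torchsparse.nn as spnn

pool = spnn.GlobalAvgPool()          # or spnn.GlobalMaxPool()
head = nn.Linear(16, 10)             # 10-way classifier (illustrative)

pooled = pool(y)                     # assumed dense [batch, 16] tensor
logits = head(pooled)
```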