reconstruction_module
- create_reconstruction_settings(speed_of_sound_in_m_per_s: int = 1540, time_spacing_in_s: float = 2.5e-08, sensor_spacing_in_mm: float = 0.1, recon_mode: str = 'pressure', apodization: str = 'BoxApodization') → Settings [source]
Convenience function that creates the SIMPA Settings required for image reconstruction.
- Parameters:
speed_of_sound_in_m_per_s – (int) speed of sound in medium in meters per second (default: 1540 m/s)
time_spacing_in_s – (float) time between sampling points in seconds (default: 2.5e-8 s, corresponding to a 40 MHz sampling rate)
sensor_spacing_in_mm – (float) space between sensor elements in millimeters (default: 0.1 mm)
recon_mode – SIMPA Tag defining the reconstruction mode: pressure (default) or differential
apodization – SIMPA Tag defining the apodization function (default box)
- Returns:
SIMPA settings
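A minimal usage sketch, assuming the convenience function is importable from the simpa top-level package (otherwise import it from the reconstruction module directly); the parameter values are purely illustrative:

```python
# Minimal usage sketch; the import path is an assumption and may differ between SIMPA versions.
from simpa import create_reconstruction_settings

reconstruction_settings = create_reconstruction_settings(
    speed_of_sound_in_m_per_s=1480,   # e.g. a water-dominated medium, illustrative value
    time_spacing_in_s=2.5e-8,         # 40 MHz sampling
    sensor_spacing_in_mm=0.1,
)
# The returned Settings object can then be attached to the global simulation settings.
```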
- class DelayAndSumAdapter(global_settings: Settings)[source]
Bases:
ReconstructionAdapterBase
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- reconstruction_algorithm(time_series_sensor_data, detection_geometry: DetectionGeometryBase)[source]
Applies the Delay and Sum beamforming algorithm [1] to the time series sensor data (2D numpy array where the first dimension corresponds to the sensor elements and the second to the recorded time steps) with the given beamforming settings (dictionary). A reconstructed image (2D numpy array) is returned. This implementation uses PyTorch Tensors to perform computations and is able to run on GPUs.
[1] T. Kirchner et al. 2018, “Signed Real-Time Delay Multiply and Sum Beamforming for Multispectral Photoacoustic Imaging”, https://doi.org/10.3390/jimaging4100121
- reconstruct_delay_and_sum_pytorch(time_series_sensor_data: ndarray, detection_geometry: DetectionGeometryBase, speed_of_sound_in_m_per_s: int = 1540, time_spacing_in_s: float = 2.5e-08, sensor_spacing_in_mm: float = 0.1, recon_mode: str = 'pressure', apodization: str = 'BoxApodization') → ndarray [source]
Convenience function for reconstructing time series data using the Delay and Sum algorithm implemented in PyTorch
- Parameters:
time_series_sensor_data – (2D numpy array) sensor data of shape (sensor elements, time steps)
detection_geometry – The DetectionGeometryBase that should be used to reconstruct the given time series data
speed_of_sound_in_m_per_s – (int) speed of sound in medium in meters per second (default: 1540 m/s)
time_spacing_in_s – (float) time between sampling points in seconds (default: 2.5e-8 s, corresponding to a 40 MHz sampling rate)
sensor_spacing_in_mm – (float) space between sensor elements in millimeters (default: 0.1 mm)
recon_mode – SIMPA Tag defining the reconstruction mode: pressure (default) or differential
apodization – SIMPA Tag defining the apodization function (default box)
- Returns:
(2D numpy array) reconstructed image
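A minimal sketch of calling the convenience function on synthetic data. The import style and the use of LinearArrayDetectionGeometry (any DetectionGeometryBase subclass works) are assumptions; the random data is purely illustrative. The other convenience functions in this module share the same calling pattern.

```python
import numpy as np
import simpa as sp  # import style is an assumption

# Synthetic time series data: 128 sensor elements x 2048 time steps (illustrative only)
time_series = np.random.randn(128, 2048).astype(np.float32)

# Any DetectionGeometryBase subclass can be used; a linear array with default settings is assumed here.
detection_geometry = sp.LinearArrayDetectionGeometry()

reconstructed_image = sp.reconstruct_delay_and_sum_pytorch(
    time_series_sensor_data=time_series,
    detection_geometry=detection_geometry,
    speed_of_sound_in_m_per_s=1540,
    time_spacing_in_s=2.5e-8,
    sensor_spacing_in_mm=0.1,
)
print(reconstructed_image.shape)  # 2D numpy array
```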
- class DelayMultiplyAndSumAdapter(global_settings: Settings)[source]
Bases:
ReconstructionAdapterBase
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- reconstruction_algorithm(time_series_sensor_data, detection_geometry: DetectionGeometryBase)[source]
Applies the Delay Multiply and Sum beamforming algorithm [1] to the time series sensor data (2D numpy array where the first dimension corresponds to the sensor elements and the second to the recorded time steps) with the given beamforming settings (dictionary). A reconstructed image (2D numpy array) is returned. This implementation uses PyTorch Tensors to perform computations and is able to run on GPUs.
[1] T. Kirchner et al. 2018, “Signed Real-Time Delay Multiply and Sum Beamforming for Multispectral Photoacoustic Imaging”, https://doi.org/10.3390/jimaging4100121
- reconstruct_delay_multiply_and_sum_pytorch(time_series_sensor_data: ndarray, detection_geometry: DetectionGeometryBase, speed_of_sound_in_m_per_s: int = 1540, time_spacing_in_s: float = 2.5e-08, sensor_spacing_in_mm: float = 0.1, recon_mode: str = 'pressure', apodization: str = 'BoxApodization') → ndarray [source]
Convenience function for reconstructing time series data using the Delay Multiply and Sum algorithm implemented in PyTorch
- Parameters:
time_series_sensor_data – (2D numpy array) sensor data of shape (sensor elements, time steps)
detection_geometry – The DetectionGeometryBase to use for the reconstruction of the given time series data
speed_of_sound_in_m_per_s – (int) speed of sound in medium in meters per second (default: 1540 m/s)
time_spacing_in_s – (float) time between sampling points in seconds (default: 2.5e-8 s, corresponding to a 40 MHz sampling rate)
sensor_spacing_in_mm – (float) space between sensor elements in millimeters (default: 0.1 mm)
recon_mode – SIMPA Tag defining the reconstruction mode: pressure (default) or differential
apodization – SIMPA Tag defining the apodization function (default box)
- Returns:
(2D numpy array) reconstructed image
- class ReconstructionAdapterBase(global_settings: Settings)[source]
Bases:
SimulationModuleBase
This class is the main entry point to perform image reconstruction using the SIMPA toolkit. All information necessary for the respective reconstruction method must be contained in the corresponding settings dictionary.
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- load_component_settings() → Settings [source]
Implements abstract method to serve reconstruction settings as component settings
- Returns:
Settings: reconstruction component settings
- abstract reconstruction_algorithm(time_series_sensor_data, detection_geometry: DetectionGeometryBase) → ndarray [source]
A deriving class needs to implement this method according to its model.
- Parameters:
time_series_sensor_data – the time series sensor data
detection_geometry – the DetectionGeometryBase to be used for the reconstruction
- Returns:
a reconstructed photoacoustic image
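A schematic sketch of how a custom adapter could be derived. The import path is an assumption, and the "reconstruction" shown is deliberately trivial; it only illustrates the required method signature and return type:

```python
import numpy as np
from simpa.core.simulation_modules.reconstruction_module import ReconstructionAdapterBase  # path assumed


class MeanProjectionAdapter(ReconstructionAdapterBase):
    """Hypothetical adapter: averages the signal over all sensor elements and tiles
    the result into a 2D array. Not a real reconstruction, only a structural example."""

    def reconstruction_algorithm(self, time_series_sensor_data, detection_geometry) -> np.ndarray:
        # time_series_sensor_data has shape (sensor elements, time steps)
        mean_signal = np.mean(time_series_sensor_data, axis=0)
        # Return a 2D array so the pipeline receives the documented output format.
        return np.tile(mean_signal, (time_series_sensor_data.shape[0], 1))
```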
- class ReconstructionTestAdapter(global_settings: Settings)[source]
Bases:
ReconstructionAdapterBase
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- apply_b_mode(data: Optional[ndarray] = None, method: Optional[str] = None) → ndarray [source]
Applies the specified B-Mode method to the data. The method is either envelope detection using the Hilbert transform (Tags.RECONSTRUCTION_BMODE_METHOD_HILBERT_TRANSFORM), taking the absolute value (Tags.RECONSTRUCTION_BMODE_METHOD_ABS), or none if nothing is specified.
- Parameters:
data – (numpy array) data used for applying B-Mode method
method – (str) Tags.RECONSTRUCTION_BMODE_METHOD_HILBERT_TRANSFORM or Tags.RECONSTRUCTION_BMODE_METHOD_ABS
- Returns:
(numpy array) data with the B-Mode method applied; all values are greater than or equal to zero
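A short sketch of applying envelope detection to a reconstructed image; the import paths for Tags and the utility function are assumptions, and the random data is purely illustrative:

```python
import numpy as np
from simpa import Tags  # import path assumed
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import apply_b_mode  # path assumed

image = np.random.randn(256, 256)  # illustrative reconstructed image

# Envelope detection via Hilbert transform; use Tags.RECONSTRUCTION_BMODE_METHOD_ABS for absolute values.
b_mode_image = apply_b_mode(data=image, method=Tags.RECONSTRUCTION_BMODE_METHOD_HILBERT_TRANSFORM)
```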
- bandpass_filter_with_settings(data: ndarray, global_settings: Settings, component_settings: Settings, device: DetectionGeometryBase) → ndarray [source]
Applies the bandpass filter specified in component_settings[Tags.BANDPASS_FILTER_METHOD], using a Tukey window-based filter as default.
- Parameters:
data – (numpy array) data to be filtered
global_settings – (Settings) settings for the whole simulation
component_settings – (Settings) settings for the reconstruction module
device – (DetectionGeometryBase) detection geometry of the device used for the recording
- Returns:
(numpy array) filtered data
- butter_bandpass_filtering(data: array, time_spacing_in_ms: Optional[float] = None, cutoff_lowpass_in_Hz: int = 8000000, cutoff_highpass_in_Hz: int = 100000, order: int = 1) → ndarray [source]
Apply a Butterworth bandpass filter of the given order with cutoff values at cutoff_lowpass_in_Hz and cutoff_highpass_in_Hz on the data using the scipy.signal.butter filter.
- Parameters:
data – (numpy array) data to be filtered
time_spacing_in_ms – (float) time spacing in milliseconds, e.g. 2.5e-5
cutoff_lowpass_in_Hz – (int) Signal above this value will be ignored (in Hz)
cutoff_highpass_in_Hz – (int) Signal below this value will be ignored (in Hz)
order – (int) order of the filter
- Returns:
(numpy array) filtered data
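A sketch of filtering synthetic sensor data. Note that time_spacing_in_ms is expected in milliseconds, so a 40 MHz sampling rate (2.5e-8 s) corresponds to 2.5e-5 ms; the import path is an assumption:

```python
import numpy as np
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import butter_bandpass_filtering  # path assumed

data = np.random.randn(64, 1024)  # (sensor elements, time steps), illustrative

filtered = butter_bandpass_filtering(
    data=data,
    time_spacing_in_ms=2.5e-5,       # 2.5e-8 s expressed in milliseconds
    cutoff_lowpass_in_Hz=8_000_000,  # ignore content above 8 MHz
    cutoff_highpass_in_Hz=100_000,   # ignore content below 100 kHz
    order=1,
)
```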
- butter_bandpass_filtering_with_settings(data: ndarray, global_settings: Settings, component_settings: Settings, device: DetectionGeometryBase) → ndarray [source]
Apply a Butterworth bandpass filter of the given order with cutoff values at cutoff_lowpass_in_Hz and cutoff_highpass_in_Hz on the data using the scipy.signal.butter filter. These values are obtained from the global_settings, component_settings, and device.
- Parameters:
data – (numpy array) data to be filtered
global_settings – (Settings) settings for the whole simulation
component_settings – (Settings) settings for the reconstruction module
device – (DetectionGeometryBase) detection geometry of the device used for the recording
- Returns:
(numpy array) filtered data
- compute_delay_and_sum_values(time_series_sensor_data: Tensor, sensor_positions: tensor, xdim: int, ydim: int, zdim: int, xdim_start: int, xdim_end: int, ydim_start: int, ydim_end: int, zdim_start: int, zdim_end: int, spacing_in_mm: float, speed_of_sound_in_m_per_s: float, time_spacing_in_ms: float, logger: Logger, torch_device: device, component_settings: Settings) → Tuple[tensor, int] [source]
Perform the core computation of Delay and Sum, without summing up the delay-dependent values.
- Returns:
values (torch tensor) of the time series data corrected for delay and sensor positioning, ready to be summed up, and n_sensor_elements (int), which might be used for later computations
- compute_image_dimensions(field_of_view_in_mm: ndarray, spacing_in_mm: float, logger: Logger) → Tuple[int, int, int, float64, float64, float64, float64, float64, float64] [source]
Computes the size of the beamformed image from the field of view of the detection geometry, given the spacing.
- Parameters:
field_of_view_in_mm – (numpy ndarray) field of view in mm as a list of xdim_start, xdim_end, ydim_start, ydim_end, zdim_start, zdim_end
spacing_in_mm – (float) space between pixels in mm
logger – (Logger) logger for debugging purposes
- Returns:
tuple with the x, z, y dimensions of the reconstructed image volume in pixels as well as the range for each dimension as start and end pixels (xdim, zdim, ydim, xdim_start, xdim_end, ydim_start, ydim_end, zdim_start, zdim_end)
- Return type:
Tuple[int, int, int, np.float64, np.float64, np.float64, np.float64, np.float64, np.float64]
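A sketch of deriving the image dimensions from a field of view; the Logger import path and the field-of-view values are assumptions:

```python
import numpy as np
from simpa.log import Logger  # import path assumed
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import compute_image_dimensions  # path assumed

# Field of view in mm: xdim_start, xdim_end, ydim_start, ydim_end, zdim_start, zdim_end
field_of_view_in_mm = np.asarray([-19.2, 19.2, 0.0, 0.0, 0.0, 25.6])

# Unpack in the documented return order.
(xdim, zdim, ydim,
 xdim_start, xdim_end, ydim_start, ydim_end, zdim_start, zdim_end) = compute_image_dimensions(
    field_of_view_in_mm, spacing_in_mm=0.1, logger=Logger())
```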
- get_apodization_factor(apodization_method: str = 'BoxApodization', dimensions: Optional[tuple] = None, n_sensor_elements=None, device: device = 'cpu') → tensor [source]
Construct apodization factors according to apodization_method [Hann, Hamming, or box apodization (default)] for the given dimensions and n_sensor_elements.
- Parameters:
apodization_method – (str) Apodization method, one of Tags.RECONSTRUCTION_APODIZATION_HANN, Tags.RECONSTRUCTION_APODIZATION_HAMMING and Tags.RECONSTRUCTION_APODIZATION_BOX (default)
dimensions – (tuple) size of each dimension of reconstructed image as int, might have 2 or 3 entries.
n_sensor_elements – (int) number of sensor elements
device – (torch device) PyTorch tensor device
- Returns:
(torch tensor) tensor with apodization factors which can be multiplied with DAS values
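A sketch constructing Hamming apodization factors for a 2D reconstruction grid; the import paths and grid size are assumptions:

```python
import torch
from simpa import Tags  # import path assumed
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import get_apodization_factor  # path assumed

apodization = get_apodization_factor(
    apodization_method=Tags.RECONSTRUCTION_APODIZATION_HAMMING,
    dimensions=(256, 128),          # (xdim, zdim) of the reconstructed image, illustrative
    n_sensor_elements=128,
    device=torch.device("cpu"),
)
# The resulting tensor can be multiplied element-wise with the delay-and-sum values.
```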
- preparing_reconstruction_and_obtaining_reconstruction_settings(time_series_sensor_data: ndarray, component_settings: Settings, global_settings: Settings, detection_geometry: DetectionGeometryBase, logger: Logger) → Tuple[tensor, tensor, float, float, float, device] [source]
Performs all preparation steps that need to be done before reconstructing an image: performs envelope detection of the time series data if specified; obtains the speed of sound value from the settings; obtains the time spacing value from the settings or the PA device; obtains the spacing from the settings; checks the PA device prerequisites; obtains the sensor positions from the PA device; moves the data arrays to the correct torch device; computes the differential mode if specified; performs bandpass filtering if specified.
- Returns:
time_series_sensor_data: (torch tensor) potentially preprocessed time series data
sensor_positions: (torch tensor) sensor element positions of the PA device
speed_of_sound_in_m_per_s: (float) speed of sound in m/s
spacing_in_mm: (float) spacing of voxels in the reconstructed image in mm
time_spacing_in_ms: (float) temporal spacing of the time series data in ms
torch_device: (torch device) cpu or cuda GPU device used for the tensors
- reconstruction_mode_transformation(time_series_sensor_data: Optional[tensor] = None, mode: str = 'pressure') → tensor [source]
Transforms time_series_sensor_data for other modes, for example Tags.RECONSTRUCTION_MODE_DIFFERENTIAL. The default mode is Tags.RECONSTRUCTION_MODE_PRESSURE.
- Parameters:
time_series_sensor_data – (torch tensor) Time series data to be transformed
mode – (str) reconstruction mode: Tags.RECONSTRUCTION_MODE_PRESSURE (default) or Tags.RECONSTRUCTION_MODE_DIFFERENTIAL
- Returns:
(torch tensor) potentially transformed tensor
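A sketch switching the time series data to differential mode before beamforming; the import paths and the random data are assumptions for illustration only:

```python
import torch
from simpa import Tags  # import path assumed
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import reconstruction_mode_transformation  # path assumed

time_series = torch.randn(128, 2048)  # (sensor elements, time steps), illustrative

differential_data = reconstruction_mode_transformation(
    time_series_sensor_data=time_series,
    mode=Tags.RECONSTRUCTION_MODE_DIFFERENTIAL,
)
```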
- tukey_bandpass_filtering(data: ndarray, time_spacing_in_ms: Optional[float] = None, cutoff_lowpass_in_Hz: int = 8000000, cutoff_highpass_in_Hz: int = 100000, tukey_alpha: float = 0.5, resampling_for_fft: bool = False) → ndarray [source]
Apply a Tukey bandpass filter with cutoff values at cutoff_lowpass_in_Hz and cutoff_highpass_in_Hz and a Tukey window with an alpha value of tukey_alpha in between on the data in Fourier space. Note that the filter operates only on positive frequencies using rfft (rfft can be used since the data is real-valued). Filtering is performed along the last dimension.
Warning: Depending on the number of data points and the time spacing, the represented frequencies may not be ideal, which leads to rounding issues.
- Parameters:
data – (numpy array) data to be filtered
time_spacing_in_ms – (float) time spacing in milliseconds, e.g. 2.5e-5
cutoff_lowpass_in_Hz – (int) Signal above this value will be ignored; signal at this value will be included as long as it is represented in the frequency space (in Hz)
cutoff_highpass_in_Hz – (int) Signal below this value will be ignored; signal at this value will be included as long as it is represented in the frequency space (in Hz)
tukey_alpha – (float) transition value between 0 (rectangular) and 1 (Hann window)
resampling_for_fft – (bool) whether the data is resampled to a power of 2 in time dimension before applying the FFT and resampled back after filtering
- Returns:
(numpy array) filtered data
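A sketch applying the Tukey bandpass filter to synthetic data; as above, the time spacing is expected in milliseconds and the import path is an assumption:

```python
import numpy as np
from simpa.core.simulation_modules.reconstruction_module.reconstruction_utils import tukey_bandpass_filtering  # path assumed

data = np.random.randn(64, 1024)  # (sensor elements, time steps), illustrative

filtered = tukey_bandpass_filtering(
    data=data,
    time_spacing_in_ms=2.5e-5,       # 40 MHz sampling expressed in milliseconds
    cutoff_lowpass_in_Hz=8_000_000,
    cutoff_highpass_in_Hz=100_000,
    tukey_alpha=0.5,                 # 0 = rectangular window, 1 = Hann window
    resampling_for_fft=False,
)
```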
- tukey_bandpass_filtering_with_settings(data: ndarray, global_settings: Settings, component_settings: Settings, device: DetectionGeometryBase) → ndarray [source]
Apply a Tukey bandpass filter with cutoff values at cutoff_lowpass_in_Hz and cutoff_highpass_in_Hz and a Tukey window with an alpha value of tukey_alpha in between on the data in Fourier space. These values are obtained from the global_settings, component_settings, and device. Note that the filter operates on both negative and positive frequencies similarly.
- Parameters:
data – (numpy array) data to be filtered
global_settings – (Settings) settings for the whole simulation
component_settings – (Settings) settings for the reconstruction module
device – (DetectionGeometryBase) detection geometry of the device used for the recording
- Returns:
(numpy array) filtered data
- tukey_window_function(target_size: int, time_spacing_in_ms: float, cutoff_lowpass_in_Hz: int, cutoff_highpass_in_Hz: int, tukey_alpha: float) → ndarray [source]
Creates the Tukey window in the desired frequency space for the given cutoff frequencies and alpha value.
- Parameters:
target_size (int) – number of time steps in time series data
time_spacing_in_ms (float) – time spacing in ms of time series data
cutoff_lowpass_in_Hz (int) – Signal above this value will be ignored; signal at this value will be included as long as it is represented in the frequency space (in Hz)
cutoff_highpass_in_Hz (int) – Signal below this value will be ignored; signal at this value will be included as long as it is represented in the frequency space (in Hz)
tukey_alpha (float) – transition value between 0 (rectangular) and 1 (Hann window)
- Returns:
tukey window for filtering time series data
- Return type:
np.ndarray
- class SignedDelayMultiplyAndSumAdapter(global_settings: Settings)[source]
Bases:
ReconstructionAdapterBase
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- reconstruction_algorithm(time_series_sensor_data, detection_geometry: DetectionGeometryBase)[source]
Applies the signed Delay Multiply and Sum beamforming algorithm [1] to the time series sensor data (2D numpy array where the first dimension corresponds to the sensor elements and the second to the recorded time steps) with the given beamforming settings (dictionary). A reconstructed image (2D numpy array) is returned. This implementation uses PyTorch Tensors to perform computations and is able to run on GPUs.
[1] T. Kirchner et al. 2018, “Signed Real-Time Delay Multiply and Sum Beamforming for Multispectral Photoacoustic Imaging”, https://doi.org/10.3390/jimaging4100121
- reconstruct_signed_delay_multiply_and_sum_pytorch(time_series_sensor_data: ndarray, detection_geometry: DetectionGeometryBase, speed_of_sound_in_m_per_s: int = 1540, time_spacing_in_s: float = 2.5e-08, sensor_spacing_in_mm: float = 0.1, recon_mode: str = 'pressure', apodization: str = 'BoxApodization') → ndarray [source]
Convenience function for reconstructing time series data using the signed Delay Multiply and Sum algorithm implemented in PyTorch
- Parameters:
time_series_sensor_data – (2D numpy array) sensor data of shape (sensor elements, time steps)
detection_geometry – The DetectionGeometryBase to use for the reconstruction of the given time series data
speed_of_sound_in_m_per_s – (int) speed of sound in medium in meters per second (default: 1540 m/s)
time_spacing_in_s – (float) time between sampling points in seconds (default: 2.5e-8 s, corresponding to a 40 MHz sampling rate)
sensor_spacing_in_mm – (float) space between sensor elements in millimeters (default: 0.1 mm)
recon_mode – SIMPA Tag defining the reconstruction mode: pressure (default) or differential
apodization – SIMPA Tag defining the apodization function (default box)
- Returns:
(2D numpy array) reconstructed image
- class TimeReversalAdapter(global_settings: Settings)[source]
Bases:
ReconstructionAdapterBase
The time reversal adapter includes the time reversal reconstruction algorithm implemented by the k-Wave toolkit into SIMPA.
Time reversal reconstruction uses the time series data and computes the forward simulation model backwards in time:
Treeby, Bradley E., Edward Z. Zhang, and Benjamin T. Cox. "Photoacoustic tomography in absorbing acoustic media using time reversal." Inverse Problems 26.11 (2010): 115003.
- Parameters:
global_settings (Settings) – The SIMPA settings dictionary
- get_acoustic_properties(input_data: dict, detection_geometry)[source]
This method extracts the acoustic tissue properties from the settings dictionary and adds this information to the input_data.
- Parameters:
input_data – a dictionary containing the information needed for time reversal.
detection_geometry – PA device that is used for reconstruction
- reconstruction_algorithm(time_series_sensor_data, detection_geometry)[source]
A deriving class needs to implement this method according to its model.
- Parameters:
time_series_sensor_data – the time series sensor data
detection_geometry – the DetectionGeometryBase to be used for the reconstruction
- Returns:
a reconstructed photoacoustic image
- reorder_time_series_data(time_series_sensor_data, detection_geometry)[source]
Reorders the time series data to match the order that is assumed by k-Wave during image reconstruction with TimeReversal.
The main issue here is that, while forward modelling allows the definition of 3D cuboid bounding boxes for the detector elements, TimeReversal does not implement this feature. Instead, a binary mask is given and its entries are indexed in a column-row-wise manner in the output. The default np.argsort() method does not yield the result expected by k-Wave, hence this workaround.