TensorFlow Module

The signxai.tf_signxai module provides explainability methods for TensorFlow models, leveraging the iNNvestigate library for LRP implementations.

Main Functions

signxai.tf_signxai.calculate_relevancemap(method, input_tensor, model, **kwargs)

Calculates a relevance map for a given method, input, and model.

Parameters:
  • method (str) – Name of the explanation method

  • input_tensor (numpy.ndarray or tf.Tensor) – Input tensor

  • model (tf.keras.Model) – TensorFlow model

  • kwargs – Additional arguments for the specific method

Returns:

Relevance map as numpy array

Return type:

numpy.ndarray
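The string-to-implementation dispatch behind calculate_relevancemap can be sketched as follows. This is a hypothetical simplification for illustration only: the _METHODS table and the stub gradient function are not SignXAI's actual internals.

```python
import numpy as np

def gradient(model, x, **kwargs):
    # Stand-in for the real gradient wrapper (illustrative only).
    return np.ones_like(x)

# Hypothetical method registry mapping method names to implementations.
_METHODS = {"gradient": gradient}

def calculate_relevancemap(method, input_tensor, model, **kwargs):
    if method not in _METHODS:
        raise ValueError(f"Unknown method: {method}")
    return _METHODS[method](model, input_tensor, **kwargs)

relevance = calculate_relevancemap("gradient", np.zeros((2, 2)), model=None)
```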

signxai.tf_signxai.calculate_relevancemaps(method, inputs, model, **kwargs)

Calculates relevance maps for multiple inputs.

Parameters:
  • method (str) – Name of the explanation method

  • inputs (list or numpy.ndarray or tf.Tensor) – List or batch of input tensors

  • model (tf.keras.Model) – TensorFlow model

  • kwargs – Additional arguments for the specific method

Returns:

List or batch of relevance maps

Return type:

numpy.ndarray

Gradient-Based Methods

The methods module provides implementations of various explainability methods.

Vanilla Gradient

signxai.tf_signxai.methods.wrappers.gradient(model_no_softmax, x, **kwargs)

Computes vanilla gradients of the model output with respect to the input.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs – Additional arguments including neuron_selection for specifying target class

Returns:

Gradient-based attribution

Return type:

numpy.ndarray
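Conceptually, the vanilla gradient is the derivative of the selected output neuron with respect to each input entry. A minimal numpy sketch, using finite differences on a toy output f(x) = Σx² (whose exact gradient is 2x) in place of backpropagation through a real model:

```python
import numpy as np

def f(x):
    # Toy scalar "model output": sum of squares.
    return np.sum(x ** 2)

def numerical_gradient(f, x, h=1e-5):
    # Central finite differences as a stand-in for backpropagation.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

x = np.array([1.0, -2.0, 3.0])
grad = numerical_gradient(f, x)  # close to the exact gradient 2 * x
```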

Input x Gradient

signxai.tf_signxai.methods.wrappers.input_t_gradient(model_no_softmax, x, **kwargs)

Computes the element-wise product of gradients and input.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs – Additional arguments including neuron_selection for specifying target class

Returns:

Gradient x Input attribution

Return type:

numpy.ndarray
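Input × Gradient simply multiplies the gradient elementwise by the input. With the same toy output f(x) = Σx² (analytic gradient 2x), the attribution works out to 2x²:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
grad = 2 * x                 # analytic gradient of f(x) = sum(x**2)
attribution = grad * x       # Input x Gradient: elementwise product
```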

Integrated Gradients

signxai.tf_signxai.methods.wrappers.integrated_gradients(model_no_softmax, x, **kwargs)

Computes Integrated Gradients by accumulating gradients along a straight-line path from a baseline (reference) input to the actual input.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs

    Additional arguments including:

    • steps: Number of integration steps (default: 50)

    • reference_inputs: Baseline input (default: zeros)

    • neuron_selection: Target class

Returns:

Integrated gradients attribution

Return type:

numpy.ndarray
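The integration can be sketched in numpy as a Riemann approximation along the path, using the toy output f(x) = Σx² with a zero baseline (the real method backpropagates through the model at each step). A useful sanity check is the completeness property: the attributions sum to f(x) − f(baseline).

```python
import numpy as np

def grad_f(x):
    # Analytic gradient of the toy output f(x) = sum(x**2).
    return 2 * x

def integrated_gradients(x, baseline, steps=50):
    # Average the gradient along the straight path baseline -> x,
    # then scale by (x - baseline) (midpoint Riemann approximation).
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas],
                       axis=0)
    return (x - baseline) * avg_grad

x = np.array([1.0, -2.0, 3.0])
ig = integrated_gradients(x, np.zeros_like(x))
```

For this particular f the result equals x² exactly, and the attributions sum to f(x) − f(0).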

SmoothGrad

signxai.tf_signxai.methods.wrappers.smoothgrad(model_no_softmax, x, **kwargs)

Computes SmoothGrad by adding Gaussian noise to copies of the input and averaging the resulting gradients.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs

    Additional arguments including:

    • augment_by_n: Number of noisy samples (default: 50)

    • noise_scale: Scale of Gaussian noise (default: 0.2)

    • neuron_selection: Target class

Returns:

SmoothGrad attribution

Return type:

numpy.ndarray
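The averaging step can be sketched in numpy with the toy gradient 2x standing in for a backpropagated model gradient; the parameter names mirror the ones listed above, but the implementation is illustrative:

```python
import numpy as np

def grad_f(x):
    # Analytic gradient of the toy output f(x) = sum(x**2).
    return 2 * x

def smoothgrad(x, augment_by_n=50, noise_scale=0.2, rng=None):
    # Average gradients over noisy copies of the input.
    rng = rng or np.random.default_rng(0)
    grads = [grad_f(x + rng.normal(0.0, noise_scale, x.shape))
             for _ in range(augment_by_n)]
    return np.mean(grads, axis=0)

x = np.array([1.0, -2.0, 3.0])
sg = smoothgrad(x, augment_by_n=5000)  # converges toward the clean gradient 2 * x
```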

SIGN Methods

The Sign module provides implementations of the SIGN explainability methods.

signxai.tf_signxai.methods.signed.calculate_sign_mu(x, mu=0, **kwargs)

Calculates the sign of the input relative to a threshold parameter mu.

Parameters:
  • x (numpy.ndarray) – Input tensor

  • mu (float) – Threshold parameter (default: 0)

  • kwargs – Additional arguments

Returns:

Sign tensor

Return type:

numpy.ndarray
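A minimal numpy sketch of a thresholded sign: entries below mu map to −1 and the rest to +1. The treatment of entries exactly at the threshold is an assumption here; the library may use a different convention.

```python
import numpy as np

def calculate_sign_mu(x, mu=0.0):
    # Thresholded sign: -1 where x < mu, +1 elsewhere (tie convention assumed).
    return np.where(x < mu, -1.0, 1.0)

s = calculate_sign_mu(np.array([-0.5, 0.0, 0.3]), mu=0.1)
```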

Gradient x SIGN

signxai.tf_signxai.methods.signed.gradient_x_sign(model_no_softmax, x, **kwargs)

Computes the element-wise product of gradients and sign of the input.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs – Additional arguments including neuron_selection for specifying target class

Returns:

Gradient x SIGN attribution

Return type:

numpy.ndarray

signxai.tf_signxai.methods.signed.gradient_x_sign_mu(model_no_softmax, x, mu, **kwargs)

Computes the element-wise product of gradients and sign of the input with threshold parameter mu.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • mu (float) – Threshold parameter

  • kwargs – Additional arguments including neuron_selection for specifying target class

Returns:

Gradient x SIGN attribution with threshold

Return type:

numpy.ndarray
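Combining the two pieces above, Gradient × SIGN_mu multiplies the gradient by the thresholded sign of the input rather than the input itself. A numpy sketch with the toy analytic gradient 2x:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
grad = 2 * x                              # analytic gradient of f(x) = sum(x**2)
sign_mu = np.where(x < 0.5, -1.0, 1.0)    # thresholded sign with mu = 0.5
attribution = grad * sign_mu              # Gradient x SIGN attribution
```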

Guided Backpropagation

signxai.tf_signxai.methods.guided_backprop.guided_backprop(model_no_softmax, x, **kwargs)

Computes guided backpropagation by modifying the ReLU gradient so that only positive gradients are propagated.

Parameters:
  • model_no_softmax (tf.keras.Model) – TensorFlow model with softmax removed

  • x (numpy.ndarray) – Input tensor

  • kwargs – Additional arguments including neuron_selection for specifying target class

Returns:

Guided backpropagation attribution

Return type:

numpy.ndarray
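The modified ReLU backward rule can be written down directly: the gradient passes only where the forward input was positive (the ordinary ReLU rule) and the incoming gradient is positive (the "guided" part). A numpy sketch of that single rule, not of the full backward pass:

```python
import numpy as np

def guided_relu_backward(upstream_grad, forward_input):
    # Gate the gradient twice: by the forward activation pattern
    # and by the sign of the incoming gradient.
    return upstream_grad * (forward_input > 0) * (upstream_grad > 0)

g = guided_relu_backward(np.array([1.0, -1.0, 2.0, -2.0]),
                         np.array([0.5, 0.5, -0.5, -0.5]))
```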

signxai.tf_signxai.methods.guided_backprop.guided_backprop_on_guided_model(model, x, layer_name=None, **kwargs)

Creates a guided model and computes guided backpropagation.

Parameters:
  • model (tf.keras.Model) – TensorFlow model

  • x (numpy.ndarray) – Input tensor

  • layer_name (str, optional) – Target layer name (for GradCAM)

  • kwargs – Additional arguments

Returns:

Guided backpropagation attribution

Return type:

numpy.ndarray

Grad-CAM

signxai.tf_signxai.methods.grad_cam.calculate_grad_cam_relevancemap(x, model, last_conv_layer_name=None, neuron_selection=None, resize=True, **kwargs)

Computes Grad-CAM by weighting the feature maps of a convolutional layer with the pooled gradients of the target class.

Parameters:
  • x (numpy.ndarray) – Input tensor

  • model (tf.keras.Model) – TensorFlow model

  • last_conv_layer_name (str, optional) – Name of the last convolutional layer

  • neuron_selection (int, optional) – Target class

  • resize (bool, optional) – Whether to resize the output to match input dimensions

  • kwargs – Additional arguments

Returns:

Grad-CAM attribution

Return type:

numpy.ndarray
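The core Grad-CAM computation, given the conv-layer activations and the gradients of the target logit with respect to them, is a gradient-weighted sum of feature maps followed by a ReLU. A self-contained numpy sketch on random tensors (the real function obtains both arrays from the model):

```python
import numpy as np

def grad_cam(feature_maps, grads):
    # feature_maps, grads: (H, W, C) conv activations and the gradients
    # of the target logit with respect to them.
    weights = grads.mean(axis=(0, 1))              # global-average-pool the gradients
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))
    return np.maximum(cam, 0)                      # ReLU keeps positive evidence only

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 3))                     # fake conv activations
G = rng.normal(size=(4, 4, 3))                     # fake gradients
cam = grad_cam(A, G)                               # (4, 4) heatmap
```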

signxai.tf_signxai.methods.grad_cam.calculate_grad_cam_relevancemap_timeseries(x, model, last_conv_layer_name=None, neuron_selection=None, resize=True, **kwargs)

Computes Grad-CAM specifically for time series data.

Parameters:
  • x (numpy.ndarray) – Input tensor (time series)

  • model (tf.keras.Model) – TensorFlow model

  • last_conv_layer_name (str, optional) – Name of the last convolutional layer

  • neuron_selection (int, optional) – Target class

  • resize (bool, optional) – Whether to resize the output to match input dimensions

  • kwargs – Additional arguments

Returns:

Grad-CAM attribution for time series

Return type:

numpy.ndarray

Layer-wise Relevance Propagation (LRP)

The iNNvestigate module provides LRP implementations for TensorFlow. This is the key integration point for iNNvestigate in SignXAI.

signxai.utils.utils.calculate_explanation_innvestigate(model, x, method, **kwargs)

Interface to iNNvestigate for LRP and other methods.

Parameters:
  • model (tf.keras.Model) – TensorFlow model

  • x (numpy.ndarray) – Input tensor

  • method (str) – iNNvestigate method name (e.g., 'lrp.z', 'lrp.epsilon', etc.)

  • kwargs

    Additional arguments including:

    • neuron_selection: Target class

    • input_layer_rule: Input layer rule ('Z', 'SIGN', 'Bounded', etc.)

    • epsilon: Epsilon value for LRP-epsilon

    • stdfactor: Standard deviation factor for LRP with varying epsilon

Returns:

LRP attribution

Return type:

numpy.ndarray
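To illustrate what the LRP-epsilon rule computes, here is the rule for a single dense layer z = xW in numpy: the output relevance is redistributed to the inputs in proportion to each contribution x_i·w_ij, with epsilon stabilising small denominators. This is the textbook rule, not iNNvestigate's implementation; a key property to check is conservation (input relevances sum to the output relevances, up to epsilon).

```python
import numpy as np

def lrp_epsilon_dense(x, W, R_out, epsilon=1e-6):
    # LRP-epsilon for one dense layer z = x @ W.
    z = x @ W
    z_stab = z + epsilon * np.where(z >= 0, 1.0, -1.0)  # signed stabiliser
    return x * (W @ (R_out / z_stab))                   # redistribute relevance

x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
R_out = np.array([1.0, 1.0])
R_in = lrp_epsilon_dense(x, W, R_out)
```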

LRP Variants

The module provides various LRP variants through iNNvestigate. Key implemented variants include:

  1. LRP-z: Basic LRP implementation

  2. LRP-epsilon: LRP with a stabilizing factor (epsilon)

  3. LRP-alpha-beta: LRP with separate treatment of positive and negative contributions

  4. LRP with SIGN Input Layer Rule: The novel SIGN method applied to LRP

  5. LRP Composite: Layer-specific LRP rules

Utility Functions

signxai.utils.utils.remove_softmax(model)

Removes the softmax activation from a TensorFlow model.

Parameters:

model (tf.keras.Model) – TensorFlow model

Returns:

Model with softmax removed (outputs raw logits)

Return type:

tf.keras.Model
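The reason the other functions expect a model with softmax removed can be seen from the softmax derivative itself: the gradient of an output probability with respect to its own logit is p(1 − p), which vanishes as the prediction saturates, washing out the attribution signal. A small numpy illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# d softmax_i / d z_i = p_i * (1 - p_i): nearly zero for a confident
# prediction, which is why attributions are computed on raw logits.
p_confident = softmax(np.array([10.0, 0.0, 0.0]))
p_uncertain = softmax(np.array([0.1, 0.0, 0.0]))
grad_confident = p_confident[0] * (1 - p_confident[0])
grad_uncertain = p_uncertain[0] * (1 - p_uncertain[0])
```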