model package

Submodules

model.cvffnn module

RosenPy: An Open Source Python Framework for Complex-Valued Neural Networks. Copyright © A. A. Cruz, K. S. Mayer, D. S. Arantes.

License

This file is part of RosenPy. RosenPy is an open source framework distributed under the terms of the GNU General Public License, as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional information on license terms, please open the Readme.md file.

RosenPy is distributed in the hope that it will be useful to every user, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with RosenPy. If not, see <http://www.gnu.org/licenses/>.

class model.cvffnn.CVFFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Complex Valued FeedForward Neural Network (CVFFNN) class.

This class handles feedforward, backpropagation, and layer addition operations for a complex-valued neural network.

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.
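As a rough illustration, an accuracy-as-percentage computation can be sketched with NumPy (the exact matching rule RosenPy uses is not shown here; exact equality is an assumption):

```python
import numpy as np

def accuracy_sketch(y, y_pred):
    # Percentage of predictions that exactly match the targets.
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    return float(np.mean(y == y_pred) * 100.0)

pct = accuracy_sketch([1, 0, 1, 1], [1, 0, 0, 1])  # 3 of 4 match
```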

add_layer(neurons, ishape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, activation=<function tanh>, weights_rate=0.001, biases_rate=0.001, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Add a layer to the neural network.

Parameters:
  • neurons (int) – The number of neurons in the layer.

  • ishape (int, optional) – The input shape for the layer. Defaults to 0.

  • weights_initializer (function, optional) – Function to initialize the weights. Defaults to random_normal.

  • bias_initializer (function, optional) – Function to initialize the biases. Defaults to random_normal.

  • activation (function, optional) – Activation function to use. Defaults to tanh.

  • weights_rate (float, optional) – The learning rate for the weights. Defaults to 0.001.

  • biases_rate (float, optional) – The learning rate for the biases. Defaults to 0.001.

  • reg_strength (float, optional) – The regularization strength. Defaults to 0.0.

  • lambda_init (float, optional) – The initial lambda for regularization. Defaults to 0.1.

  • lr_decay_method (function, optional) – Method for decaying the learning rate. Defaults to none_decay.

  • lr_decay_rate (float, optional) – The rate at which the learning rate decays. Defaults to 0.0.

  • lr_decay_steps (int, optional) – The number of steps after which the learning rate decays. Defaults to 1.

  • module (object, optional) – The module (e.g., NumPy or CuPy) to be used for computation. Defaults to None.
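For intuition, a complex-valued random_normal initializer for a layer with ishape inputs and neurons outputs might look like the following (the helper name, the scale, and the exact distribution are assumptions, not RosenPy's implementation):

```python
import numpy as np

def random_normal_complex(ishape, neurons, scale=0.1, seed=0):
    # Independent Gaussian real and imaginary parts for each weight.
    rng = np.random.default_rng(seed)
    return scale * (rng.standard_normal((ishape, neurons))
                    + 1j * rng.standard_normal((ishape, neurons)))

W = random_normal_complex(3, 5)  # weights for a layer with 3 inputs, 5 neurons
```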

backprop(y, y_pred, epoch)[source]

Perform the backpropagation operation on the neural network.

Parameters:
  • y (numpy.array or cupy.array) – The true target values.

  • y_pred (numpy.array or cupy.array) – The predicted values from the network.

  • epoch (int) – The current training epoch.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:
  • data (array-like) – The input data.

Returns:

array-like

The converted input data.
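Conceptually, backend conversion amounts to moving the data into the active module's array type. A minimal sketch, assuming CuPy is optional and NumPy is the fallback:

```python
import numpy as np

try:
    import cupy as cp  # GPU backend, if available
    xp = cp
except ImportError:
    xp = np            # CPU fallback

def convert_data_sketch(data):
    # Wrap the input in the active backend's ndarray type.
    return xp.asarray(data)

batch = convert_data_sketch([1.0, 2.0, 3.0])
```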

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:
  • normalized_output_data (array-like) – The data to be denormalized.

  • mean (float, optional) – The mean used for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Perform the feedforward operation on the neural network.

Parameters:

input_data (numpy.array or cupy.array) – The input data to feed into the network.

Returns:

The output of the final layer after feedforward.

Return type:

numpy.array or cupy.array
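The feedforward pass of a CVFFNN layer stack can be sketched as repeated complex affine transforms followed by a complex activation (the layer representation and the use of tanh are assumptions for illustration):

```python
import numpy as np

def feedforward_sketch(x, layers):
    # Each layer is a (weights, biases, activation) triple.
    a = x
    for W, b, act in layers:
        a = act(a @ W + b)  # complex matrix product, bias, activation
    return a

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
b = np.zeros(3, dtype=complex)
out = feedforward_sketch(np.array([1 + 1j, 2 - 1j]), [(W, b, np.tanh)])
```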

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:
  • x_train (array-like) – The input training data.

  • y_train (array-like) – The target training data.

  • x_val (array-like, optional) – The input validation data. Default is None.

  • y_val (array-like, optional) – The target validation data. Default is None.

  • epochs (int, optional) – The number of training epochs. Default is 100.

  • verbose (int, optional) – Controls the verbosity of the training process. Default is 10.

  • batch_gen (function, optional) – The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

  • batch_size (int, optional) – The batch size to use during training. Default is 1.

  • optimizer (Optimizer, optional) – The optimizer to use during training. Default is a GradientDescent instance.
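At its core, one training epoch combines a forward pass, the complex MSE cost, and a gradient-descent update. A self-contained sketch on a single complex weight (the Wirtinger-gradient update is an assumption about the mechanics, not RosenPy's code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 1)) + 1j * rng.standard_normal((32, 1))
w_true = 0.5 - 0.25j
y = w_true * x          # targets generated by a known complex weight

w, lr = 0.0 + 0.0j, 0.1
for epoch in range(200):
    err = w * x - y
    loss = np.mean(np.abs(err) ** 2)      # complex mean squared error
    grad = np.mean(err * np.conj(x))      # gradient w.r.t. conj(w)
    w -= lr * grad
```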

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:
  • input_data (array-like) – The data to be normalized.

  • mean (float, optional) – The mean for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.
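Assuming the usual z-score convention (subtract the mean, divide by the standard deviation), normalization and denormalization are inverses:

```python
import numpy as np

def normalize_sketch(x, mean, std_dev):
    return (x - mean) / std_dev

def denormalize_sketch(z, mean, std_dev):
    return z * std_dev + mean

x = np.array([1.0, 3.0, 5.0])
m, s = x.mean(), x.std()
x_back = denormalize_sketch(normalize_sketch(x, m, s), m, s)
```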

predict(x, status=1)

Predicts the output for the given input data.

Parameters:
  • x (array-like) – The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:
  • epoch (int) – The current epoch number.
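As an example of the mechanics behind lr_decay_rate and lr_decay_steps, a staircase exponential schedule could look like this (RosenPy's decay_func implementations are not shown; this is one common scheme):

```python
def staircase_decay_sketch(lr0, decay_rate, decay_steps, epoch):
    # Multiply the base rate by decay_rate once every decay_steps epochs.
    return lr0 * (decay_rate ** (epoch // decay_steps))

lr = staircase_decay_sketch(0.1, 0.5, 10, 20)  # two decay steps applied
```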

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:
  • data (array-like) – The input data.

model.cvrbfnn module


class model.cvrbfnn.CVRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Complex-Valued Radial Basis Function Neural Network (CVRBFNN) class.

This class handles the feedforward, backpropagation, and layer addition operations specific to this architecture, and derives from the NeuralNetwork class.
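For intuition, a single radial-basis hidden unit with a complex center gamma and a real width sigma can be sketched as below; the Gaussian form is an illustrative assumption, not the exact complex RBF kernel RosenPy implements:

```python
import numpy as np

def rbf_unit_sketch(x, gamma, sigma):
    # Response decays with the squared distance from the center gamma.
    return np.exp(-np.abs(x - gamma) ** 2 / sigma)

peak = rbf_unit_sketch(1 + 1j, 1 + 1j, 2.0)  # input exactly at the center
```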

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_crbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones_real>, gamma_initializer=<function opt_crbf_gamma>, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a layer to the neural network.

Parameters:
  • neurons (int) – Number of neurons in the layer.

  • ishape (int, optional) – Input shape for the layer.

  • oshape (int, optional) – Output shape for the layer.

  • weights_initializer (function, optional) – Function to initialize the weights.

  • bias_initializer (function, optional) – Function to initialize the biases.

  • sigma_initializer (function, optional) – Function to initialize sigma values.

  • gamma_initializer (function, optional) – Function to initialize gamma values.

  • weights_rate (float, optional) – Learning rate for weights.

  • biases_rate (float, optional) – Learning rate for biases.

  • gamma_rate (float, optional) – Learning rate for gamma.

  • sigma_rate (float, optional) – Learning rate for sigma.

  • reg_strength (float, optional) – Regularization strength.

  • lambda_init (float, optional) – Initial lambda value for regularization.

  • lr_decay_method (function, optional) – Learning rate decay method.

  • lr_decay_rate (float, optional) – Learning rate decay rate.

  • lr_decay_steps (int, optional) – Learning rate decay steps.

  • module (module, optional) – Module for computation (e.g., NumPy or CuPy).

Returns:

None

backprop(y, y_pred, epoch)[source]

Performs the backpropagation operation on the neural network.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values from the neural network.

  • epoch (int) – The current epoch number.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:
  • data (array-like) – The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean, std_dev)[source]

Denormalize the output data.

Parameters:
  • normalized_output_data (array-like) – Normalized output data.

  • mean (float) – Mean value for denormalization.

  • std_dev (float) – Standard deviation for denormalization.

Returns:

array-like

Denormalized output data.

feedforward(input_data)[source]

Performs the feedforward operation on the neural network.

Parameters:
  • input_data (array-like) – The input data to be fed into the neural network.

Returns:

array-like

The output of the neural network after the feedforward operation.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:
  • x_train (array-like) – The input training data.

  • y_train (array-like) – The target training data.

  • x_val (array-like, optional) – The input validation data. Default is None.

  • y_val (array-like, optional) – The target validation data. Default is None.

  • epochs (int, optional) – The number of training epochs. Default is 100.

  • verbose (int, optional) – Controls the verbosity of the training process. Default is 10.

  • batch_gen (function, optional) – The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

  • batch_size (int, optional) – The batch size to use during training. Default is 1.

  • optimizer (Optimizer, optional) – The optimizer to use during training. Default is a GradientDescent instance.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean, std_dev)[source]

Normalize the input data.

Parameters:
  • input_data (array-like) – Input data to be normalized.

  • mean (float) – Mean value for normalization.

  • std_dev (float) – Standard deviation for normalization.

Returns:

array-like

Normalized input data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:
  • x (array-like) – The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:
  • epoch (int) – The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:
  • data (array-like) – The input data.

model.fcrbfnn module


class model.fcrbfnn.FCRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Fully Complex Transmittance Radial Basis Function Neural Network (FCRBFNN) class.

This class handles the feedforward, backpropagation, and layer addition operations specific to this architecture, and derives from the NeuralNetwork class.

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_crbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones_real>, gamma_initializer=<function opt_crbf_gamma>, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a layer to the neural network.

Parameters:
  • neurons (int) – Number of neurons in the layer.

  • ishape (int, optional) – Input shape for the layer.

  • oshape (int, optional) – Output shape for the layer.

  • weights_initializer (function, optional) – Function to initialize the weights.

  • bias_initializer (function, optional) – Function to initialize the biases.

  • sigma_initializer (function, optional) – Function to initialize sigma values.

  • gamma_initializer (function, optional) – Function to initialize gamma values.

  • weights_rate (float, optional) – Learning rate for weights.

  • biases_rate (float, optional) – Learning rate for biases.

  • gamma_rate (float, optional) – Learning rate for gamma.

  • sigma_rate (float, optional) – Learning rate for sigma.

  • reg_strength (float, optional) – Regularization strength.

  • lambda_init (float, optional) – Initial lambda value for regularization.

  • lr_decay_method (function, optional) – Learning rate decay method.

  • lr_decay_rate (float, optional) – Learning rate decay rate.

  • lr_decay_steps (int, optional) – Learning rate decay steps.

  • module (module, optional) – Module for computation (e.g., NumPy or CuPy).

Returns:

None

backprop(y, y_pred, epoch)[source]

Performs the backpropagation operation on the neural network.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values from the neural network.

  • epoch (int) – The current epoch number.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:
  • data (array-like) – The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:
  • normalized_output_data (array-like) – The data to be denormalized.

  • mean (float, optional) – The mean used for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Performs the feedforward operation on the neural network.

Parameters:
  • input_data (array-like) – The input data to be fed into the neural network.

Returns:

array-like

The output of the neural network after the feedforward operation.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:
  • x_train (array-like) – The input training data.

  • y_train (array-like) – The target training data.

  • x_val (array-like, optional) – The input validation data. Default is None.

  • y_val (array-like, optional) – The target validation data. Default is None.

  • epochs (int, optional) – The number of training epochs. Default is 100.

  • verbose (int, optional) – Controls the verbosity of the training process. Default is 10.

  • batch_gen (function, optional) – The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

  • batch_size (int, optional) – The batch size to use during training. Default is 1.

  • optimizer (Optimizer, optional) – The optimizer to use during training. Default is a GradientDescent instance.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:
  • input_data (array-like) – The data to be normalized.

  • mean (float, optional) – The mean for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:
  • x (array-like) – The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:
  • epoch (int) – The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:
  • data (array-like) – The input data.

model.ptrbfnnc module


class model.ptrbfnnc.PTRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Deep Phase Transmittance Radial Basis Function Neural Network (PTRBFNN) class.

This class handles the feedforward, backpropagation, and layer addition operations (fully connected and convolutional) specific to this architecture, and derives from the NeuralNetwork class.

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_ptrbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones>, gamma_initializer=<function opt_ptrbf_gamma>, reg_strength=0.0, lambda_init=0.1, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, kernel_initializer=<function opt_ptrbf_gamma>, kernel_size=3, module=None, category=1, layer_type='Fully')[source]

Adds a layer to the neural network.

This method is responsible for appending a new layer to the neural network structure. The layer can be fully connected or convolutional, depending on the parameters provided.

Parameters:
  • neurons (int) – The number of neurons in the hidden layer. If ishape is different from zero and this is the first layer of the model, neurons represents the number of neurons in the first layer (i.e., the number of input features).

  • ishape (int, optional) – The number of neurons in the first layer (i.e., the number of input features). Default is 0.

  • oshape (int, optional) – The number of output neurons (shape of the output). If not provided, defaults to the number of neurons. Default is 0.

  • weights_initializer (function, optional) – The function used to initialize the layer’s weights. Default is init_func.opt_ptrbf_weights.

  • bias_initializer (function, optional) – The function used to initialize the layer’s biases. Default is init_func.zeros.

  • sigma_initializer (function, optional) – The function used to initialize the sigma parameter. Default is init_func.ones.

  • gamma_initializer (function, optional) – The function used to initialize the gamma parameter. Default is init_func.opt_ptrbf_gamma.

  • reg_strength (float, optional) – The strength of L2 regularization applied to the layer. Default is 0.0 (no regularization).

  • lambda_init (float, optional) – The initial value for the regularization term. Default is 0.1.

  • weights_rate (float, optional) – The learning rate applied to the weights during training. Default is 0.001.

  • biases_rate (float, optional) – The learning rate applied to the biases during training. Default is 0.001.

  • gamma_rate (float, optional) – The learning rate applied to the gamma parameter during training. Default is 0.01.

  • sigma_rate (float, optional) – The learning rate applied to the sigma parameter during training. Default is 0.01.

  • lr_decay_method (function, optional) – The method used for decaying the learning rate over time. Default is decay_func.none_decay.

  • lr_decay_rate (float, optional) – The rate at which the learning rate decays. Default is 0.0 (no decay).

  • lr_decay_steps (int, optional) – The number of steps after which the learning rate decays. Default is 1.

  • kernel_initializer (function, optional) – The function used to initialize the kernel for convolutional layers. Default is init_func.opt_ptrbf_gamma.

  • kernel_size (int, optional) – The size of the convolutional kernel. Default is 3.

  • module (object, optional) – The computation module used (e.g., NumPy or CuPy). If not provided, it is set during the initialization of the NeuralNetwork class. Default is None.

  • category (int, optional) – The type of convolution: 1 for transient and steady-state, 0 for steady-state only. Default is 1.

  • layer_type (str, optional) – The type of layer to add: “Fully” for fully connected layers, “Conv” for convolutional layers. Default is “Fully”.

Returns:

This method does not return any value; it modifies the network structure by appending a new layer.

Return type:

None

Notes

The layer is added to the self.layers list, which is a sequence of layers in the neural network. The parameters provided, such as initialization methods and learning rates, are specific to each layer.
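The category parameter distinguishes whether the convolution output keeps the transient edges or only the steady-state portion. With NumPy's 1-D convolution this maps naturally onto the "full" and "valid" modes (the mapping is an assumption for illustration, not RosenPy's implementation):

```python
import numpy as np

signal = np.array([1, 2, 3, 4], dtype=complex)
kernel = np.array([0.5, 0.25, 0.125])              # kernel_size = 3

full = np.convolve(signal, kernel, mode="full")    # transient + steady state
steady = np.convolve(signal, kernel, mode="valid") # steady state only
```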

backprop(y, y_pred, epoch)[source]

Performs the backpropagation operation on the neural network.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values from the neural network.

  • epoch (int) – The current epoch number.

Returns:

array-like

The gradients of the loss function with respect to the network parameters.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:
  • data (array-like) – The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean, std_dev)[source]

Denormalize the output data.

Parameters:
  • normalized_output_data (numpy.ndarray or cupy.ndarray) – Normalized output data to be denormalized.

  • mean (float) – Mean used for the original normalization.

  • std_dev (float) – Standard deviation used for the original normalization.

Returns:

Denormalized output data.

Return type:

cupy/numpy.ndarray

feedforward(x)[source]

Performs the feedforward operation on the neural network.

Parameters:
  • x (array-like) – The input data to be fed into the neural network.

Returns:

array-like

The output of the neural network after the feedforward operation.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:
  • x_train (array-like) – The input training data.

  • y_train (array-like) – The target training data.

  • x_val (array-like, optional) – The input validation data. Default is None.

  • y_val (array-like, optional) – The target validation data. Default is None.

  • epochs (int, optional) – The number of training epochs. Default is 100.

  • verbose (int, optional) – Controls the verbosity of the training process. Default is 10.

  • batch_gen (function, optional) – The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

  • batch_size (int, optional) – The batch size to use during training. Default is 1.

  • optimizer (Optimizer, optional) – The optimizer to use during training. Default is a GradientDescent instance.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean, std_dev)[source]

Normalize the input data.

Parameters:
  • input_data (numpy.ndarray or cupy.ndarray) – Input data to be normalized.

  • mean (float) – Mean value for normalization.

  • std_dev (float) – Standard deviation for normalization.

Returns:

Normalized input data.

Return type:

cupy/numpy.ndarray

predict(x, status=1)

Predicts the output for the given input data.

Parameters:
  • x (array-like) – The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:
  • epoch (int) – The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:
  • data (array-like) – The input data.

model.rp_layer module


class model.rp_layer.Layer(ishape, neurons, oshape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, gamma_initializer=<function rbf_default>, sigma_initializer=<function ones>, activation=<function tanh>, reg_strength=0.0, lambda_init=0.1, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.0, sigma_rate=0.0, cvnn=1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, kernel_initializer=<function opt_ptrbf_weights>, kernel_size=3, module=None, category=1, layer_type='Fully')[source]

Bases: object

Specification for a layer to be passed to the Neural Network during construction. This includes a variety of parameters to configure each layer based on its activation type.

model.rp_nn module


class model.rp_nn.NeuralNetwork(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: object

Abstract base class that wraps the neural network functionality shared by all RosenPy models; every network class in this package derives from it.

accuracy(y, y_pred)[source]

Computes the accuracy of the predictions.

Parameters:
  • y (array-like) – The true labels or target values.

  • y_pred (array-like) – The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer()[source]

convert_data(data)[source]

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:
  • data (array-like) – The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)[source]

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:
  • normalized_output_data (array-like) – The data to be denormalized.

  • mean (float, optional) – The mean used for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)[source]

Trains the neural network on the provided training data.

Parameters:
  • x_train (array-like) – The input training data.

  • y_train (array-like) – The target training data.

  • x_val (array-like, optional) – The input validation data. Default is None.

  • y_val (array-like, optional) – The target validation data. Default is None.

  • epochs (int, optional) – The number of training epochs. Default is 100.

  • verbose (int, optional) – Controls the verbosity of the training process. Default is 10.

  • batch_gen (function, optional) – The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

  • batch_size (int, optional) – The batch size to use during training. Default is 1.

  • optimizer (Optimizer, optional) – The optimizer to use during training. Default is a GradientDescent instance.

get_history()[source]

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)[source]

Normalizes the input data based on the provided mean and standard deviation.

Parameters:
  • input_data (array-like) – The data to be normalized.

  • mean (float, optional) – The mean for normalization. Default is 0.

  • std_dev (float, optional) – The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)[source]

Predicts the output for the given input data.

Parameters:
  • x (array-like) – The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)[source]

Updates the learning rates of all layers based on the current epoch.

Parameters:
  • epoch (int) – The current epoch number.

verify_input(data)[source]

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:
  • data (array-like) – The input data.

model.rp_optimizer module


class model.rp_optimizer.AMSGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

AMSGrad optimizer.

This class implements the AMSGrad optimization algorithm, a variant of Adam that improves convergence in certain cases by keeping track of the maximum past squared gradient.
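
As a rough illustration, the real-valued AMSGrad rule can be sketched in NumPy as follows; the names mirror the documented mt/vt arguments, while the complex-valued bookkeeping RosenPy applies internally is omitted:

```python
import numpy as np

def amsgrad_step(w, grad, lr, mt, vt, vhat,
                 beta1=0.9, beta2=0.999, epsilon=1e-8):
    mt = beta1 * mt + (1 - beta1) * grad          # first moment estimate
    vt = beta2 * vt + (1 - beta2) * grad ** 2     # second moment estimate
    # AMSGrad's difference from Adam: keep the elementwise maximum of all
    # past second-moment estimates, so the effective step size never grows.
    vhat = np.maximum(vhat, vt)
    w = w - lr * mt / (np.sqrt(vhat) + epsilon)
    return w, mt, vt, vhat

w = np.array([1.0, -2.0])
mt = vt = vhat = np.zeros_like(w)
for _ in range(200):
    w, mt, vt, vhat = amsgrad_step(w, 2 * w, 0.05, mt, vt, vhat)  # f(w)=w@w
```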

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the AMSGrad optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.AdaGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the AdaGrad optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.Adam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Adam optimizer.

This class implements the Adam optimization algorithm, which is an adaptive learning rate optimization algorithm.
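
A compact NumPy sketch of the standard (real-valued) Adam rule, including the bias correction driven by the step counter (RosenPy's update_parameters receives the epoch for this purpose):

```python
import numpy as np

def adam_step(w, grad, lr, t, mt, vt, beta1=0.9, beta2=0.999, epsilon=1e-8):
    mt = beta1 * mt + (1 - beta1) * grad       # first moment estimate
    vt = beta2 * vt + (1 - beta2) * grad ** 2  # second moment estimate
    m_hat = mt / (1 - beta1 ** t)              # bias-corrected moments,
    v_hat = vt / (1 - beta2 ** t)              # compensating zero init
    w = w - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return w, mt, vt

w = np.array([3.0])
mt = vt = np.zeros_like(w)
for t in range(1, 301):
    w, mt, vt = adam_step(w, 2 * w, 0.1, t, mt, vt)  # minimize f(w) = w**2
```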

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the Adam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.Adamax(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the Adamax optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVAMSGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Complex-Valued AMSGrad optimizer.

This class implements the complex-valued variant of the AMSGrad optimization algorithm.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued AMSGrad optimizer.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates.

vt : tuple

The second moment estimates.

ut : tuple

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVAdaGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued AdaGrad optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVAdam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Complex-Valued Adam optimizer.

This class implements the complex-valued version of the Adam optimization algorithm.
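
One common complex-valued adaptation of Adam (whether RosenPy uses exactly this scheme is an assumption) keeps the first moment complex while building the second moment from the squared gradient magnitude g·conj(g), so the adaptive denominator stays real and positive:

```python
import numpy as np

def cv_adam_step(w, grad, lr, t, mt, vt,
                 beta1=0.9, beta2=0.999, epsilon=1e-8):
    mt = beta1 * mt + (1 - beta1) * grad                         # complex
    vt = beta2 * vt + (1 - beta2) * (grad * np.conj(grad)).real  # real >= 0
    m_hat = mt / (1 - beta1 ** t)
    v_hat = vt / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + epsilon), mt, vt

w = np.array([2.0 + 2.0j])
mt = np.zeros_like(w)
vt = np.zeros(1)
for t in range(1, 301):
    # For f(w) = |w|^2 the Wirtinger gradient df/dconj(w) equals w.
    w, mt, vt = cv_adam_step(w, w, 0.05, t, mt, vt)
```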

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued Adam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVAdamax(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued Adamax optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVNadam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Nadam

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued Nadam optimizer.

Parameters:

parameters : list of arrays

The parameters of the neural network.

gradients : list of arrays

The gradients of the loss function with respect to the parameters.

learning_rate : float

The learning rate for updating the parameters.

epoch : int

The current epoch number.

mt : list of arrays

The first moment estimates.

vt : list of arrays

The second moment estimates.

ut : list of arrays

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.CVRMSprop(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued RMSprop optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.GradientDescent(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Gradient Descent optimizer.

This class implements the standard gradient descent optimization algorithm.
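
The update itself is the classic rule w ← w − η·∇L; a minimal sketch matching the documented tuple-of-parameters signature (the mt, vt, ut moment arguments are accepted but unused by this optimizer):

```python
import numpy as np

def gradient_descent_update(parameters, gradients, learning_rate):
    # One plain gradient-descent step per parameter array.
    return tuple(p - learning_rate * g
                 for p, g in zip(parameters, gradients))

weights = np.array([[0.5, -0.5]])
biases = np.array([0.25])
grads = (np.array([[1.0, -1.0]]), np.array([0.5]))
weights, biases = gradient_descent_update((weights, biases), grads, 0.1)
```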

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the gradient descent optimizer.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates (not used in this optimizer).

vt : tuple

The second moment estimates (not used in this optimizer).

ut : tuple

The third moment estimates (not used in this optimizer).

Returns:

tuple

The updated parameters.

class model.rp_optimizer.Nadam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the Nadam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.Optimizer(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: object

Base class for all optimizers used in the neural network.

This class defines common parameters and methods that can be used by all derived optimizers.
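
Judging from the documented interface, each concrete optimizer subclasses Optimizer and overrides update_parameters. A hypothetical minimal subclass (the class body here is a sketch, not RosenPy's implementation):

```python
import numpy as np

class Optimizer:
    # Base class: stores the array backend and defines the update contract.
    def set_module(self, xp):
        self.xp = xp  # e.g. numpy or cupy

    def update_parameters(self, parameters, gradients, learning_rate,
                          epoch, mt, vt, ut):
        raise NotImplementedError  # provided by concrete optimizers

class PlainDescent(Optimizer):
    # Hypothetical concrete optimizer: ignores the moment estimates.
    def update_parameters(self, parameters, gradients, learning_rate,
                          epoch, mt, vt, ut):
        return tuple(p - learning_rate * g
                     for p, g in zip(parameters, gradients))

opt = PlainDescent()
opt.set_module(np)
(updated,) = opt.update_parameters((np.array([1.0]),), (np.array([2.0]),),
                                   0.1, 1, None, None, None)
```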

set_module(xp)[source]

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters of the neural network based on the gradients.

This is a placeholder method that should be implemented by subclasses.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates.

vt : tuple

The second moment estimates.

ut : tuple

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.RMSprop(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the RMSprop optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.rp_optimizer.SAMSGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the SAMSGrad optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

model.scffnn module


class model.scffnn.SCFFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

The Split Complex FeedForward Neural Network (SCFFNN) class.

This class provides the specifications and methods to construct, train, and utilize a split-complex feedforward neural network, including feedforward, backpropagation, and layer addition functionality.

This class inherits from the base NeuralNetwork class.
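
"Split-complex" here means a real activation is applied separately to the real and imaginary parts of each neuron's pre-activation, rather than a fully complex activation. A NumPy sketch of a split-complex tanh:

```python
import numpy as np

def split_tanh(z):
    # Split-complex activation: the real function acts on each part
    # independently, so each component stays bounded in (-1, 1).
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.array([0.5 + 2.0j, -1.0 - 0.5j])
a = split_tanh(z)
```

This sidesteps Liouville's theorem (a bounded entire complex function must be constant) at the cost of treating the two components independently.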

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, activation=<function tanh>, weights_rate=0.001, biases_rate=0.001, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a new layer to the split-complex neural network.

Parameters:

neurons : int

The number of neurons in the new layer.

ishape : int, optional

The input shape for the layer. Defaults to 0.

weights_initializer : function, optional

Function used to initialize the weights. Defaults to random_normal.

bias_initializer : function, optional

Function used to initialize the biases. Defaults to random_normal.

activation : function, optional

Activation function for the layer. Defaults to tanh.

weights_rate : float, optional

Learning rate for the weights. Defaults to 0.001.

biases_rate : float, optional

Learning rate for the biases. Defaults to 0.001.

reg_strength : float, optional

Strength of L2 regularization. Defaults to 0.0.

lambda_init : float, optional

Initial lambda value for regularization. Defaults to 0.1.

lr_decay_method : function, optional

Method for decaying the learning rate. Defaults to none_decay.

lr_decay_rate : float, optional

Rate at which learning rate decays. Defaults to 0.0.

lr_decay_steps : int, optional

Number of steps after which the learning rate decays. Defaults to 1.

module : object, optional

Computational module used for the layer (e.g., NumPy or CuPy). Defaults to None.

backprop(y, y_pred, epoch)[source]

Executes the backpropagation operation on the neural network.

Parameters:

y : array-like

True labels or target values.

y_pred : array-like

Predicted values from the neural network.

epoch : int

The current epoch number during training.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:

normalized_output_data : array-like

The data to be denormalized.

mean : float, optional

The mean used for normalization. Default is 0.

std_dev : float, optional

The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Executes the feedforward operation on the neural network.

Parameters:

input_data : array-like

Input data to be processed by the neural network.

Returns:

array-like

The output of the neural network after performing feedforward.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:

input_data : array-like

The data to be normalized.

mean : float, optional

The mean for normalization. Default is 0.

std_dev : float, optional

The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.

verify_input(data)

Verifies that the input data type is appropriate for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.

Module contents


class model.AMSGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

AMSGrad optimizer.

This class implements the AMSGrad optimization algorithm, a variant of Adam that improves convergence in certain cases by keeping track of the maximum past squared gradient.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the AMSGrad optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.Adam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Adam optimizer.

This class implements the Adam optimization algorithm, which is an adaptive learning rate optimization algorithm.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the Adam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.CVAMSGrad(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Complex-Valued AMSGrad optimizer.

This class implements the complex-valued variant of the AMSGrad optimization algorithm.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued AMSGrad optimizer.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates.

vt : tuple

The second moment estimates.

ut : tuple

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.CVAdam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Complex-Valued Adam optimizer.

This class implements the complex-valued version of the Adam optimization algorithm.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued Adam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.CVFFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Complex Valued FeedForward Neural Network (CVFFNN) class.

This class handles feedforward, backpropagation, and layer addition operations for a complex-valued neural network.

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, activation=<function tanh>, weights_rate=0.001, biases_rate=0.001, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Add a layer to the neural network.

Parameters:
  • neurons (int) – The number of neurons in the layer.

  • ishape (int, optional) – The input shape for the layer. Defaults to 0.

  • weights_initializer (function, optional) – Function to initialize the weights. Defaults to random_normal.

  • bias_initializer (function, optional) – Function to initialize the biases. Defaults to random_normal.

  • activation (function, optional) – Activation function to use. Defaults to tanh.

  • weights_rate (float, optional) – The learning rate for the weights. Defaults to 0.001.

  • biases_rate (float, optional) – The learning rate for the biases. Defaults to 0.001.

  • reg_strength (float, optional) – The regularization strength. Defaults to 0.0.

  • lambda_init (float, optional) – The initial lambda for regularization. Defaults to 0.1.

  • lr_decay_method (function, optional) – Method for decaying the learning rate. Defaults to none_decay.

  • lr_decay_rate (float, optional) – The rate at which the learning rate decays. Defaults to 0.0.

  • lr_decay_steps (int, optional) – The number of steps after which the learning rate decays. Defaults to 1.

  • module (object, optional) – The module (e.g., NumPy or CuPy) to be used for computation. Defaults to None.

backprop(y, y_pred, epoch)[source]

Perform the backpropagation operation on the neural network.

Parameters:
  • y (numpy.array or cupy.array) – The true target values.

  • y_pred (numpy.array or cupy.array) – The predicted values from the network.

  • epoch (int) – The current training epoch.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:

normalized_output_data : array-like

The data to be denormalized.

mean : float, optional

The mean used for normalization. Default is 0.

std_dev : float, optional

The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Perform the feedforward operation on the neural network.

Parameters:

input_data (numpy.array or cupy.array) – The input data to feed into the network.

Returns:

The output of the final layer after feedforward.

Return type:

numpy.array or cupy.array

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:

input_data : array-like

The data to be normalized.

mean : float, optional

The mean for normalization. Default is 0.

std_dev : float, optional

The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.

verify_input(data)

Verifies that the input data type is appropriate for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.

class model.CVNadam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Nadam

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the complex-valued Nadam optimizer.

Parameters:

parameters : list of arrays

The parameters of the neural network.

gradients : list of arrays

The gradients of the loss function with respect to the parameters.

learning_rate : float

The learning rate for updating the parameters.

epoch : int

The current epoch number.

mt : list of arrays

The first moment estimates.

vt : list of arrays

The second moment estimates.

ut : list of arrays

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.CVRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Complex-Valued Radial Basis Function Neural Network (CVRBFNN) class. This class provides the feedforward, backpropagation, and layer-addition methods specific to a complex-valued RBF network, and derives from the NeuralNetwork base class.
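
Given the gamma (center) and sigma (width) parameters exposed by add_layer, a plausible form of the hidden-unit response is a Gaussian of the complex distance to each center; the exact kernel RosenPy implements is an assumption here:

```python
import numpy as np

def crbf_hidden(z, gamma, sigma):
    # phi_k(z) = exp(-||z - gamma_k||^2 / sigma_k^2), with complex inputs
    # and centers; the squared distance uses the complex modulus.
    d2 = np.sum(np.abs(z - gamma) ** 2, axis=1)
    return np.exp(-d2 / sigma ** 2)

gamma = np.array([[0.0 + 0.0j], [1.0 + 1.0j]])   # two centers, 1-D input
sigma = np.array([1.0, 1.0])                     # one width per center
phi = crbf_hidden(np.array([1.0 + 1.0j]), gamma, sigma)
```

The hidden activations are real even though the inputs and centers are complex, since they depend only on the distance moduli.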

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_crbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones_real>, gamma_initializer=<function opt_crbf_gamma>, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a layer to the neural network.

Parameters:

neurons : int

Number of neurons in the layer.

ishape : int, optional

Input shape for the layer.

oshape : int, optional

Output shape for the layer.

weights_initializer : function, optional

Function to initialize the weights.

bias_initializer : function, optional

Function to initialize the biases.

sigma_initializer : function, optional

Function to initialize sigma values.

gamma_initializer : function, optional

Function to initialize gamma values.

weights_rate : float, optional

Learning rate for weights.

biases_rate : float, optional

Learning rate for biases.

gamma_rate : float, optional

Learning rate for gamma.

sigma_rate : float, optional

Learning rate for sigma.

reg_strength : float, optional

Regularization strength.

lambda_init : float, optional

Initial lambda value for regularization.

lr_decay_method : function, optional

Learning rate decay method.

lr_decay_rate : float, optional

Learning rate decay rate.

lr_decay_steps : int, optional

Learning rate decay steps.

module : module, optional

Module for computation (e.g., numpy, cupy).

Returns:

None

backprop(y, y_pred, epoch)[source]

Performs the backpropagation operation on the neural network.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values from the neural network.

epoch : int

The current epoch number.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean, std_dev)[source]

Denormalize the output data.

Parameters:

normalized_output_data : array-like

Normalized output data.

mean : float

Mean value for denormalization.

std_dev : float

Standard deviation for denormalization.

Returns:

array-like

Denormalized output data.

feedforward(input_data)[source]

Performs the feedforward operation on the neural network.

Parameters:

input_data : array-like

The input data to be fed into the neural network.

Returns:

array-like

The output of the neural network after the feedforward operation.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean, std_dev)[source]

Normalize the input data.

Parameters:

input_data : array-like

Input data to be normalized.

mean : float

Mean value for normalization.

std_dev : float

Standard deviation for normalization.

Returns:

array-like

Normalized input data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.

class model.FCRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

Fully Complex Transmittance Radial Basis Function Neural Network (FCRBFNN) class. This includes the feedforward, backpropagation, and layer-addition methods specific to this architecture. This class derives from the NeuralNetwork class.

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_crbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones_real>, gamma_initializer=<function opt_crbf_gamma>, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a layer to the neural network.

Parameters:

neurons : int

Number of neurons in the layer.

ishape : int, optional

Input shape for the layer.

oshape : int, optional

Output shape for the layer.

weights_initializer : function, optional

Function to initialize the weights.

bias_initializer : function, optional

Function to initialize the biases.

sigma_initializer : function, optional

Function to initialize sigma values.

gamma_initializer : function, optional

Function to initialize gamma values.

weights_rate : float, optional

Learning rate for weights.

biases_rate : float, optional

Learning rate for biases.

gamma_rate : float, optional

Learning rate for gamma.

sigma_rate : float, optional

Learning rate for sigma.

reg_strength : float, optional

Regularization strength.

lambda_init : float, optional

Initial lambda value for regularization.

lr_decay_method : function, optional

Learning rate decay method.

lr_decay_rate : float, optional

Learning rate decay rate.

lr_decay_steps : int, optional

Learning rate decay steps.

module : module, optional

Module for computation (e.g., numpy, cupy).

Returns:

None
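For intuition, each FCRBFNN hidden neuron applies a fully complex Gaussian-style transmittance built from its center (gamma) and width (sigma) parameters. The sketch below assumes the common form exp(−sigma · (x − gamma)²) evaluated with complex arithmetic; RosenPy's exact functional form may differ:

```python
import cmath

def crbf_neuron(x, gamma, sigma):
    """Fully complex Gaussian transmittance: exp of a complex argument.

    gamma is the (complex) center and sigma a real width parameter.
    This is an illustrative sketch, not RosenPy's implementation.
    """
    d = x - gamma
    return cmath.exp(-sigma * d * d)

phi = crbf_neuron(0.5 + 0.5j, gamma=0.0 + 0.0j, sigma=1.0)
```

Because the squared distance d² is itself complex, the transmittance carries both magnitude and phase information, unlike a real-valued RBF.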

backprop(y, y_pred, epoch)[source]

Performs the backpropagation operation on the neural network.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values from the neural network.

epoch : int

The current epoch number.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:

normalized_output_data : array-like

The data to be denormalized.

mean : float, optional

The mean used for normalization. Default is 0.

std_dev : float, optional

The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Performs the feedforward operation on the neural network.

Parameters:

input_data : array-like

The input data to be fed into the neural network.

Returns:

array-like

The output of the neural network after the feedforward operation.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:

input_data : array-like

The data to be normalized.

mean : float, optional

The mean for normalization. Default is 0.

std_dev : float, optional

The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.

class model.GradientDescent(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Gradient Descent optimizer.

This class implements the standard gradient descent optimization algorithm.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the gradient descent optimizer.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates (not used in this optimizer).

vt : tuple

The second moment estimates (not used in this optimizer).

ut : tuple

The third moment estimates (not used in this optimizer).

Returns:

tuple

The updated parameters.
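For plain gradient descent the update reduces to p ← p − η·g for each parameter; the moment arguments exist only for interface compatibility with the adaptive optimizers. A minimal sketch (not RosenPy's code):

```python
def gd_update(parameters, gradients, learning_rate):
    # Plain gradient descent: step each parameter against its gradient.
    return tuple(p - learning_rate * g for p, g in zip(parameters, gradients))

params = (1.0, -2.0)
grads = (0.5, -0.5)
updated = gd_update(params, grads, learning_rate=0.1)
# updated ≈ (0.95, -1.95)
```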

class model.Layer(ishape, neurons, oshape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, gamma_initializer=<function rbf_default>, sigma_initializer=<function ones>, activation=<function tanh>, reg_strength=0.0, lambda_init=0.1, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.0, sigma_rate=0.0, cvnn=1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, kernel_initializer=<function opt_ptrbf_weights>, kernel_size=3, module=None, category=1, layer_type='Fully')[source]

Bases: object

Specification for a layer to be passed to the neural network during construction. It includes a variety of parameters for configuring each layer according to its activation type.

class model.Nadam(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: Optimizer

Nadam (Nesterov-accelerated Adam) optimizer.

set_module(xp)

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters using the Nadam optimizer.

Parameters:

Same as the parent class.

Returns:

tuple

The updated parameters along with the updated moment estimates.
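Nadam combines Adam's bias-corrected moment estimates with a Nesterov-style lookahead on the first moment. A scalar sketch of one step, using this class's default beta1, beta2, and epsilon (the beta and ut arguments of the RosenPy interface are not modeled here; the learning rate is an arbitrary example value):

```python
import math

def nadam_step(p, g, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam update for a scalar parameter p with gradient g at step t."""
    m = beta1 * m + (1 - beta1) * g          # first moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Nesterov lookahead: mix the corrected moment with the current gradient.
    m_bar = beta1 * m_hat + (1 - beta1) * g / (1 - beta1 ** t)
    return p - lr * m_bar / (math.sqrt(v_hat) + eps), m, v

p, m, v = nadam_step(1.0, 0.5, 0.0, 0.0, t=1)
```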

class model.NeuralNetwork(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: object

Abstract base class wrapping all neural network functionality of RosenPy. All concrete network classes derive from this superclass.

accuracy(y, y_pred)[source]

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.
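A minimal sketch of accuracy as the percentage of matching predictions (the exact comparison RosenPy applies to complex-valued outputs may differ):

```python
def accuracy(y, y_pred):
    # Fraction of exact matches, reported as a percentage.
    matches = sum(1 for t, p in zip(y, y_pred) if t == p)
    return 100.0 * matches / len(y)

acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])
# acc == 75.0
```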

add_layer()[source]
convert_data(data)[source]

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)[source]

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:

normalized_output_data : array-like

The data to be denormalized.

mean : float, optional

The mean used for normalization. Default is 0.

std_dev : float, optional

The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)[source]

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()[source]

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)[source]

Normalizes the input data based on the provided mean and standard deviation.

Parameters:

input_data : array-like

The data to be normalized.

mean : float, optional

The mean for normalization. Default is 0.

std_dev : float, optional

The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)[source]

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)[source]

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.
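Each layer's rate is updated by its lr_decay_method; for example, a conventional step-based exponential decay could look like the following sketch (illustrative only; RosenPy's decay functions may differ in form):

```python
def exponential_decay(initial_rate, epoch, decay_rate, decay_steps):
    # Multiply the rate by decay_rate once every decay_steps epochs.
    return initial_rate * (decay_rate ** (epoch // decay_steps))

lr = exponential_decay(0.001, epoch=20, decay_rate=0.5, decay_steps=10)
# lr == 0.001 * 0.5**2
```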

verify_input(data)[source]

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.

class model.Optimizer(beta=100, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]

Bases: object

Base class for all optimizers used in the neural network.

This class defines common parameters and methods that can be used by all derived optimizers.

set_module(xp)[source]

Sets the backend module (NumPy or CuPy) for matrix operations.

Parameters:

xp : module

The backend module (NumPy or CuPy).

update_parameters(parameters, gradients, learning_rate, epoch, mt, vt, ut)[source]

Updates the parameters of the neural network based on the gradients.

This is a placeholder method that should be implemented by subclasses.

Parameters:

parameters : tuple

The parameters of the neural network.

gradients : tuple

The gradients of the loss function with respect to the parameters.

learning_rate : tuple

The learning rates for updating the parameters.

epoch : int

The current epoch number.

mt : tuple

The first moment estimates.

vt : tuple

The second moment estimates.

ut : tuple

The third moment estimates.

Returns:

tuple

The updated parameters along with the updated moment estimates.

class model.SCFFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]

Bases: NeuralNetwork

The Split Complex FeedForward Neural Network (SCFFNN) class.

This class provides the specifications and methods to construct, train, and utilize a split-complex feedforward neural network, including feedforward, backpropagation, and layer addition functionality.

This class inherits from the base NeuralNetwork class.
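In a split-complex network, a real-valued activation is applied separately to the real and imaginary parts of each pre-activation, rather than as one fully complex function. A minimal sketch with the default tanh:

```python
import math

def split_tanh(z):
    # Apply real tanh independently to the real and imaginary parts.
    return complex(math.tanh(z.real), math.tanh(z.imag))

out = split_tanh(0.5 + 2.0j)
```

This split form avoids the singularities of fully complex activations at the cost of treating the two components independently.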

accuracy(y, y_pred)

Computes the accuracy of the predictions.

Parameters:

y : array-like

The true labels or target values.

y_pred : array-like

The predicted values.

Returns:

float

The accuracy of the predictions as a percentage.

add_layer(neurons, ishape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, activation=<function tanh>, weights_rate=0.001, biases_rate=0.001, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]

Adds a new layer to the split-complex neural network.

Parameters:

neurons : int

The number of neurons in the new layer.

ishape : int, optional

The input shape for the layer. Defaults to 0.

weights_initializer : function, optional

Function used to initialize the weights. Defaults to random_normal.

bias_initializer : function, optional

Function used to initialize the biases. Defaults to random_normal.

activation : function, optional

Activation function for the layer. Defaults to tanh.

weights_rate : float, optional

Learning rate for the weights. Defaults to 0.001.

biases_rate : float, optional

Learning rate for the biases. Defaults to 0.001.

reg_strength : float, optional

Strength of L2 regularization. Defaults to 0.0.

lambda_init : float, optional

Initial lambda value for regularization. Defaults to 0.1.

lr_decay_method : function, optional

Method for decaying the learning rate. Defaults to none_decay.

lr_decay_rate : float, optional

Rate at which learning rate decays. Defaults to 0.0.

lr_decay_steps : int, optional

Number of steps after which the learning rate decays. Defaults to 1.

module : object, optional

Computational module used for the layer (e.g., NumPy or CuPy). Defaults to None.

backprop(y, y_pred, epoch)[source]

Executes the backpropagation operation on the neural network.

Parameters:

y : array-like

True labels or target values.

y_pred : array-like

Predicted values from the neural network.

epoch : int

The current epoch number during training.

convert_data(data)

Converts the input data to the appropriate format for the current backend (NumPy or CuPy).

Parameters:

data : array-like

The input data.

Returns:

array-like

The converted input data.

denormalize_outputs(normalized_output_data, mean=0, std_dev=0)

Denormalizes the output data based on the provided mean and standard deviation.

Parameters:

normalized_output_data : array-like

The data to be denormalized.

mean : float, optional

The mean used for normalization. Default is 0.

std_dev : float, optional

The standard deviation used for normalization. Default is 0.

Returns:

array-like

The denormalized data.

feedforward(input_data)[source]

Executes the feedforward operation on the neural network.

Parameters:

input_data : array-like

Input data to be processed by the neural network.

Returns:

array-like

The output of the neural network after performing feedforward.

fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)

Trains the neural network on the provided training data.

Parameters:

x_train : array-like

The input training data.

y_train : array-like

The target training data.

x_val : array-like, optional

The input validation data. Default is None.

y_val : array-like, optional

The target validation data. Default is None.

epochs : int, optional

The number of training epochs. Default is 100.

verbose : int, optional

Controls the verbosity of the training process. Default is 10.

batch_gen : function, optional

The batch generation function to use during training. Default is batch_gen_func.batch_sequential.

batch_size : int, optional

The batch size to use during training. Default is 1.

optimizer : Optimizer, optional

The optimizer to use during training. Default is GradientDescent with specified parameters.

get_history()

Returns the training history of the neural network.

Returns:

dict

A dictionary containing the training history.

normalize_data(input_data, mean=0, std_dev=0)

Normalizes the input data based on the provided mean and standard deviation.

Parameters:

input_data : array-like

The data to be normalized.

mean : float, optional

The mean for normalization. Default is 0.

std_dev : float, optional

The standard deviation for normalization. Default is 0.

Returns:

array-like

The normalized data.

predict(x, status=1)

Predicts the output for the given input data.

Parameters:

x : array-like

The input data for prediction.

Returns:

array-like

The predicted output for the input data.

update_learning_rate(epoch)

Updates the learning rates of all layers based on the current epoch.

Parameters:

epoch : int

The current epoch number.

verify_input(data)

Verifies the input data type for optimal performance of the RosenPy framework.

Parameters:

data : array-like

The input data.