model.ptrbfnnc module
RosenPy: An Open Source Python Framework for Complex-Valued Neural Networks. Copyright © A. A. Cruz, K. S. Mayer, D. S. Arantes.
License
This file is part of RosenPy. RosenPy is an open source framework distributed under the terms of the GNU General Public License, as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional information on license terms, please open the Readme.md file.
RosenPy is distributed in the hope that it will be useful to every user, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with RosenPy. If not, see <http://www.gnu.org/licenses/>.
- class model.ptrbfnnc.PTRBFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]
Bases: NeuralNetwork
Specification for the Deep Phase Transmittance Radial Basis Function Neural Network, to be passed to the model at construction. This includes the feedforward, backpropagation, and layer-adding methods specific to this architecture.
This class derives from the NeuralNetwork class.
- accuracy(y, y_pred)
Computes the accuracy of the predictions.
Parameters:
- y : array-like
The true labels or target values.
- y_pred : array-like
The predicted values.
Returns:
- float
The accuracy of the predictions as a percentage.
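The documented behaviour (fraction of matching predictions, returned as a percentage) can be sketched in plain NumPy. This is an illustrative stand-alone function, not RosenPy's implementation, which may handle complex-valued targets differently:

```python
import numpy as np

def accuracy(y, y_pred):
    """Percentage of predictions that exactly match the targets.

    Minimal sketch of the documented behaviour; RosenPy's own method
    may apply additional handling for complex-valued outputs.
    """
    y = np.asarray(y)
    y_pred = np.asarray(y_pred)
    return 100.0 * np.mean(y == y_pred)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```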
- add_layer(neurons, ishape=0, oshape=0, weights_initializer=<function opt_ptrbf_weights>, bias_initializer=<function zeros>, sigma_initializer=<function ones>, gamma_initializer=<function opt_ptrbf_gamma>, reg_strength=0.0, lambda_init=0.1, weights_rate=0.001, biases_rate=0.001, gamma_rate=0.01, sigma_rate=0.01, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, kernel_initializer=<function opt_ptrbf_gamma>, kernel_size=3, module=None, category=1, layer_type='Fully')[source]
Adds a layer to the neural network.
This method is responsible for appending a new layer to the neural network structure. The layer can be fully connected or convolutional, depending on the parameters provided.
- Parameters:
neurons (int) – The number of neurons in the hidden layer. If ishape is nonzero and this is the first layer of the model, neurons represents the number of neurons in the first layer (i.e., the number of input features).
ishape (int, optional) – The number of neurons in the first layer (i.e., the number of input features). Default is 0.
oshape (int, optional) – The number of output neurons (shape of the output). If not provided, defaults to the number of neurons. Default is 0.
weights_initializer (function, optional) – The function used to initialize the layer’s weights. Default is init_func.opt_ptrbf_weights.
bias_initializer (function, optional) – The function used to initialize the layer’s biases. Default is init_func.zeros.
sigma_initializer (function, optional) – The function used to initialize the sigma parameter. Default is init_func.ones.
gamma_initializer (function, optional) – The function used to initialize the gamma parameter. Default is init_func.opt_ptrbf_gamma.
reg_strength (float, optional) – The strength of L2 regularization applied to the layer. Default is 0.0 (no regularization).
lambda_init (float, optional) – The initial value for the regularization term. Default is 0.1.
weights_rate (float, optional) – The learning rate applied to the weights during training. Default is 0.001.
biases_rate (float, optional) – The learning rate applied to the biases during training. Default is 0.001.
gamma_rate (float, optional) – The learning rate applied to the gamma parameter during training. Default is 0.01.
sigma_rate (float, optional) – The learning rate applied to the sigma parameter during training. Default is 0.01.
lr_decay_method (function, optional) – The method used for decaying the learning rate over time. Default is decay_func.none_decay.
lr_decay_rate (float, optional) – The rate at which the learning rate decays. Default is 0.0 (no decay).
lr_decay_steps (int, optional) – The number of steps after which the learning rate decays. Default is 1.
kernel_initializer (function, optional) – The function used to initialize the kernel for convolutional layers. Default is init_func.opt_ptrbf_gamma.
kernel_size (int, optional) – The size of the convolutional kernel. Default is 3.
module (object, optional) – The computation module used (e.g., NumPy or CuPy). If not provided, it is set during the initialization of the NeuralNetwork class. Default is None.
category (int, optional) – The type of convolution: 1 for transient and steady-state, 0 for steady-state only. Default is 1.
layer_type (str, optional) – The type of layer to add: “Fully” for fully connected layers, “Conv” for convolutional layers. Default is “Fully”.
- Returns:
This method does not return any value; it modifies the network structure by appending a new layer.
- Return type:
None
Notes
The layer is added to the self.layers list, which is a sequence of layers in the neural network. The parameters provided, such as initialization methods and learning rates, are specific to each layer.
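As a rough illustration of the note above, each call to add_layer can be thought of as appending a record of per-layer hyperparameters to self.layers. The structure below is hypothetical; it only mirrors a subset of the documented parameters, and RosenPy's actual layer objects are richer:

```python
def make_layer_record(neurons, ishape=0, oshape=0, reg_strength=0.0,
                      weights_rate=0.001, biases_rate=0.001,
                      gamma_rate=0.01, sigma_rate=0.01,
                      layer_type="Fully"):
    """Hypothetical per-layer record; names mirror add_layer's parameters."""
    return {
        "neurons": neurons,
        "ishape": ishape,
        # per the docs, oshape defaults to the number of neurons
        "oshape": oshape if oshape else neurons,
        "reg_strength": reg_strength,
        "rates": {"weights": weights_rate, "biases": biases_rate,
                  "gamma": gamma_rate, "sigma": sigma_rate},
        "layer_type": layer_type,
    }

layers = []
layers.append(make_layer_record(8, ishape=4))   # first (input) layer
layers.append(make_layer_record(2))             # output layer
```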
- backprop(y, y_pred, epoch)[source]
Performs the backpropagation operation on the neural network.
Parameters:
- y : array-like
The true labels or target values.
- y_pred : array-like
The predicted values from the neural network.
- epoch : int
The current epoch number.
Returns:
- array-like
The gradients of the loss function with respect to the network parameters.
- convert_data(data)
Converts the input data to the appropriate format for the current backend (NumPy or CuPy).
Parameters:
- data : array-like
The input data.
Returns:
- array-like
The converted input data.
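Since NumPy and CuPy expose the same asarray interface, backend conversion can be sketched by parameterizing over the array module. This is an assumption about the mechanism, shown here with NumPy standing in for either backend:

```python
import numpy as np

def convert_data(data, module=np):
    """Convert input to the active backend's ndarray.

    `module` stands in for the numpy-or-cupy module the network was
    constructed with; cupy.asarray accepts the same call shape.
    """
    return module.asarray(data)

x = convert_data([[1.0, 2.0], [3.0, 4.0]])
print(type(x).__name__)  # ndarray
```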
- denormalize_outputs(normalized_output_data, mean, std_dev)[source]
Denormalize the output data.
- Parameters:
normalized_output_data (cupy/numpy.ndarray) – Normalized output data to be denormalized.
mean (cupy/numpy.ndarray) – The mean used in the original normalization.
std_dev (cupy/numpy.ndarray) – The standard deviation used in the original normalization.
- Returns:
Denormalized output data.
- Return type:
cupy/numpy.ndarray
- feedforward(x)[source]
Performs the feedforward operation on the neural network.
Parameters:
- x : array-like
The input data to be fed into the neural network.
Returns:
- array-like
The output of the neural network after the feedforward operation.
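A single layer of the feedforward pass can be sketched as below. This is a simplified illustration assuming a split real/imaginary Gaussian kernel with complex centers gamma and complex widths sigma, which is one plausible reading of a phase-transmittance RBF; RosenPy's exact kernel and its stacking of layers may differ:

```python
import numpy as np

def pt_rbf_layer(x, gamma, sigma, W, b):
    """Illustrative PT-RBF-style layer (not RosenPy's implementation).

    Real and imaginary parts of the input-to-centre distances feed
    separate Gaussian kernels, giving a complex-valued activation phi,
    followed by a complex linear output W @ phi + b.
    """
    d_re = np.sum((x.real[None, :] - gamma.real) ** 2, axis=1)
    d_im = np.sum((x.imag[None, :] - gamma.imag) ** 2, axis=1)
    phi = np.exp(-d_re / sigma.real) + 1j * np.exp(-d_im / sigma.imag)
    return W @ phi + b

rng = np.random.default_rng(0)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)        # complex input
gamma = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
sigma = np.ones(5) * (1 + 1j)
W = rng.standard_normal((2, 5)) + 1j * rng.standard_normal((2, 5))
y = pt_rbf_layer(x, gamma, sigma, W, np.zeros(2))
print(y.shape)  # (2,)
```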
- fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)
Trains the neural network on the provided training data.
Parameters:
- x_train : array-like
The input training data.
- y_train : array-like
The target training data.
- x_val : array-like, optional
The input validation data. Default is None.
- y_val : array-like, optional
The target validation data. Default is None.
- epochs : int, optional
The number of training epochs. Default is 100.
- verbose : int, optional
Controls the verbosity of the training process. Default is 10.
- batch_gen : function, optional
The batch generation function to use during training. Default is batch_gen_func.batch_sequential.
- batch_size : int, optional
The batch size to use during training. Default is 1.
- optimizer : Optimizer, optional
The optimizer to use during training. Default is a GradientDescent instance.
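The default batch generator's presumable behaviour, yielding mini-batches in order, can be sketched as a simple generator. This is an assumed reading of the name batch_sequential, not RosenPy's code:

```python
import numpy as np

def batch_sequential(x, y, batch_size=1):
    """Yield (x, y) mini-batches in sequential order.

    Sketch of what a sequential batch generator plausibly does; the
    final batch may be smaller when batch_size does not divide n.
    """
    n = x.shape[0]
    for start in range(0, n, batch_size):
        yield x[start:start + batch_size], y[start:start + batch_size]

x = np.arange(10).reshape(5, 2)
y = np.arange(5)
batches = list(batch_sequential(x, y, batch_size=2))
print(len(batches))  # 3
```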
- get_history()
Returns the training history of the neural network.
Returns:
- dict
A dictionary containing the training history.
- normalize_data(input_data, mean, std_dev)[source]
Normalize the input data.
- Parameters:
input_data (cupy/numpy.ndarray) – Input data to be normalized.
mean (cupy/numpy.ndarray) – The mean used for normalization.
std_dev (cupy/numpy.ndarray) – The standard deviation used for normalization.
- Returns:
Normalized input data.
- Return type:
cupy/numpy.ndarray
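Given the method names and parameters, normalize_data and denormalize_outputs plausibly form a z-score round trip. The sketch below assumes that reading; the actual transforms in RosenPy may differ:

```python
import numpy as np

def normalize_data(input_data, mean, std_dev):
    # assumed z-score normalization: shift by mean, scale by std_dev
    return (input_data - mean) / std_dev

def denormalize_outputs(normalized, mean, std_dev):
    # inverse transform: undo the scaling, then the shift
    return normalized * std_dev + mean

x = np.array([1.0, 2.0, 3.0])
m, s = x.mean(), x.std()
z = normalize_data(x, m, s)
restored = denormalize_outputs(z, m, s)
print(np.allclose(restored, x))  # True
```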
- predict(x, status=1)
Predicts the output for the given input data.
Parameters:
- x : array-like
The input data for prediction.
- status : int, optional
Default is 1.
Returns:
- array-like
The predicted output for the input data.