model.cvffnn module
RosenPy: An Open Source Python Framework for Complex-Valued Neural Networks. Copyright © A. A. Cruz, K. S. Mayer, D. S. Arantes.
License
This file is part of RosenPy. RosenPy is an open source framework distributed under the terms of the GNU General Public License, as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional information on license terms, please open the Readme.md file.
RosenPy is distributed in the hope that it will be useful to every user, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with RosenPy. If not, see <http://www.gnu.org/licenses/>.
- class model.cvffnn.CVFFNN(cost_func=<function mse>, patience=inf, gpu_enable=False)[source]
Bases: NeuralNetwork
Complex-Valued FeedForward Neural Network (CVFFNN) class.
This class handles feedforward, backpropagation, and layer addition operations for a complex-valued neural network.
- accuracy(y, y_pred)
Computes the accuracy of the predictions.
Parameters:
- y : array-like
The true labels or target values.
- y_pred : array-like
The predicted values.
Returns:
- float
The accuracy of the predictions as a percentage.
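As a point of reference, a minimal NumPy sketch of an accuracy-as-percentage metric is shown below. The exact comparison rule RosenPy applies (e.g. exact match vs. thresholded error) is not specified here; this sketch assumes exact label matches:

```python
import numpy as np

def accuracy(y, y_pred):
    """Percentage of predictions that exactly match the true labels
    (an assumed comparison rule, not necessarily RosenPy's)."""
    y = np.asarray(y)
    y_pred = np.asarray(y_pred)
    return 100.0 * np.mean(y == y_pred)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```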
- add_layer(neurons, ishape=0, weights_initializer=<function random_normal>, bias_initializer=<function random_normal>, activation=<function tanh>, weights_rate=0.001, biases_rate=0.001, reg_strength=0.0, lambda_init=0.1, lr_decay_method=<function none_decay>, lr_decay_rate=0.0, lr_decay_steps=1, module=None)[source]
Add a layer to the neural network.
- Parameters:
neurons (int) – The number of neurons in the layer.
ishape (int, optional) – The input shape for the layer. Defaults to 0.
weights_initializer (function, optional) – Function to initialize the weights. Defaults to random_normal.
bias_initializer (function, optional) – Function to initialize the biases. Defaults to random_normal.
activation (function, optional) – Activation function to use. Defaults to tanh.
weights_rate (float, optional) – The learning rate for the weights. Defaults to 0.001.
biases_rate (float, optional) – The learning rate for the biases. Defaults to 0.001.
reg_strength (float, optional) – The regularization strength. Defaults to 0.0.
lambda_init (float, optional) – The initial lambda for regularization. Defaults to 0.1.
lr_decay_method (function, optional) – Method for decaying the learning rate. Defaults to none_decay.
lr_decay_rate (float, optional) – The rate at which the learning rate decays. Defaults to 0.0.
lr_decay_steps (int, optional) – The number of steps after which the learning rate decays. Defaults to 1.
module (object, optional) – The module (e.g., NumPy or CuPy) to be used for computation. Defaults to None.
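To illustrate what a layer added this way computes, here is a self-contained NumPy sketch of a complex-valued dense layer with normally initialized weights and a tanh activation. The `scale` parameter and the use of a fully complex tanh (rather than a split real/imaginary tanh) are assumptions for illustration, not RosenPy's confirmed internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(ishape, neurons, scale=0.1):
    """Complex weights and biases from a normal distribution, mirroring
    a random_normal initializer (scale is an assumed parameter)."""
    w = scale * (rng.standard_normal((ishape, neurons))
                 + 1j * rng.standard_normal((ishape, neurons)))
    b = scale * (rng.standard_normal((1, neurons))
                 + 1j * rng.standard_normal((1, neurons)))
    return w, b

def layer_forward(x, w, b):
    """Affine transform followed by a fully complex tanh activation."""
    return np.tanh(x @ w + b)

x = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
w, b = make_layer(ishape=3, neurons=5)
out = layer_forward(x, w, b)
print(out.shape)  # (4, 5)
```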
- backprop(y, y_pred, epoch)[source]
Perform the backpropagation operation on the neural network.
- Parameters:
y (numpy.array or cupy.array) – The true target values.
y_pred (numpy.array or cupy.array) – The predicted values from the network.
epoch (int) – The current training epoch.
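Backpropagation in complex-valued networks is commonly formulated with Wirtinger (CR-calculus) gradients. The sketch below shows that core idea for a single complex linear layer trained by gradient descent on the MSE; it is a conceptual sketch under that assumption, not RosenPy's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic complex regression problem: y = X @ w_true
X = rng.standard_normal((32, 4)) + 1j * rng.standard_normal((32, 4))
w_true = rng.standard_normal((4, 1)) + 1j * rng.standard_normal((4, 1))
y = X @ w_true

w = np.zeros((4, 1), dtype=complex)

def mse(e):
    """Mean squared magnitude of the complex error."""
    return np.mean(np.abs(e) ** 2)

losses = []
for epoch in range(200):
    e = X @ w - y                      # prediction error
    # Wirtinger gradient of the MSE with respect to conj(w)
    grad = X.conj().T @ e / len(X)
    w -= 0.1 * grad                    # gradient-descent update
    losses.append(mse(X @ w - y))

print(losses[0] > losses[-1])  # True: the loss decreases
```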
- convert_data(data)
Converts the input data to the appropriate format for the current backend (NumPy or CuPy).
Parameters:
- data : array-like
The input data.
Returns:
- array-like
The converted input data.
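A plausible shape for this backend dispatch is sketched below; the `gpu_enable` flag mirrors the constructor argument, and the fallback behavior when CuPy is unavailable is an assumption:

```python
import numpy as np

try:
    import cupy as cp          # optional GPU backend
except ImportError:
    cp = None

def convert_data(data, gpu_enable=False):
    """Move data to the active backend: CuPy when the GPU is enabled
    and available, NumPy otherwise (assumed fallback behavior)."""
    if gpu_enable and cp is not None:
        return cp.asarray(data)
    return np.asarray(data)

x = convert_data([1 + 2j, 3 - 1j])
print(type(x).__module__)  # numpy when no GPU backend is active
```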
- denormalize_outputs(normalized_output_data, mean=0, std_dev=0)
Denormalizes the output data based on the provided mean and standard deviation.
Parameters:
- normalized_output_data : array-like
The data to be denormalized.
- mean : float, optional
The mean used for normalization. Default is 0.
- std_dev : float, optional
The standard deviation used for normalization. Default is 0.
Returns:
- array-like
The denormalized data.
- feedforward(input_data)[source]
Perform the feedforward operation on the neural network.
- Parameters:
input_data (numpy.array or cupy.array) – The input data to feed into the network.
- Returns:
The output of the final layer after feedforward.
- Return type:
numpy.array or cupy.array
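Conceptually, the feedforward pass propagates the input through each layer in turn. A minimal NumPy sketch, assuming two stacked complex dense layers with a fully complex tanh (the activation choice is an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def init(shape):
    """Small complex normal initialization (illustrative helper)."""
    return 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

# Two stacked complex layers: 3 -> 8 -> 2
layers = [(init((3, 8)), init((1, 8))),
          (init((8, 2)), init((1, 2)))]

def feedforward(x, layers):
    """Propagate the input through every layer, applying a complex tanh."""
    a = x
    for w, b in layers:
        a = np.tanh(a @ w + b)
    return a

x = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
out = feedforward(x, layers)
print(out.shape)  # (5, 2)
```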
- fit(x_train, y_train, x_val=None, y_val=None, epochs=100, verbose=10, batch_gen=<function batch_sequential>, batch_size=1, optimizer=<model.rp_optimizer.GradientDescent object>)
Trains the neural network on the provided training data.
Parameters:
- x_train : array-like
The input training data.
- y_train : array-like
The target training data.
- x_val : array-like, optional
The input validation data. Default is None.
- y_val : array-like, optional
The target validation data. Default is None.
- epochs : int, optional
The number of training epochs. Default is 100.
- verbose : int, optional
Controls the verbosity of the training process. Default is 10.
- batch_gen : function, optional
The batch generation function to use during training. Default is batch_gen_func.batch_sequential.
- batch_size : int, optional
The batch size to use during training. Default is 1.
- optimizer : Optimizer, optional
The optimizer to use during training. Default is GradientDescent with specified parameters.
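The default batch generator, batch_sequential, presumably yields consecutive mini-batches in order. A sketch of that assumed behavior:

```python
import numpy as np

def batch_sequential(x, y, batch_size=1):
    """Yield consecutive (x, y) mini-batches in order; the final batch
    may be smaller (assumed behavior of the documented default)."""
    n = len(x)
    for start in range(0, n, batch_size):
        yield x[start:start + batch_size], y[start:start + batch_size]

x = np.arange(10).reshape(5, 2)
y = np.arange(5)
batches = list(batch_sequential(x, y, batch_size=2))
print(len(batches))  # 3 batches of sizes 2, 2, 1
```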
- get_history()
Returns the training history of the neural network.
Returns:
- dict
A dictionary containing the training history.
- normalize_data(input_data, mean=0, std_dev=0)
Normalizes the input data based on the provided mean and standard deviation.
Parameters:
- input_data : array-like
The data to be normalized.
- mean : float, optional
The mean for normalization. Default is 0.
- std_dev : float, optional
The standard deviation for normalization. Default is 0.
Returns:
- array-like
The normalized data.
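Assuming standard z-score normalization (the scheme is not stated explicitly), normalize_data and denormalize_outputs form an invertible pair when given the same statistics:

```python
import numpy as np

def normalize_data(x, mean, std_dev):
    """z-score normalization (assumed to be the scheme used)."""
    return (x - mean) / std_dev

def denormalize_outputs(z, mean, std_dev):
    """Invert the normalization with the same statistics."""
    return z * std_dev + mean

x = np.array([1.0, 2.0, 3.0, 4.0])
mean, std = x.mean(), x.std()
z = normalize_data(x, mean, std)
x_back = denormalize_outputs(z, mean, std)
print(np.allclose(x, x_back))  # True: the round trip recovers the data
```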
- predict(x, status=1)
Predicts the output for the given input data.
Parameters:
- x : array-like
The input data for prediction.
Returns:
- array-like
The predicted output for the input data.