Surface Fitting using Neural Networks

Version 1.2.3 (1.36 MB) by S0852306
Solve N-dimensional surface fitting problems with extremely high accuracy.
Updated 13 Sep 2023

Neural Networks for Nonlinear Regression & Classification
Neural networks are universal function approximators: given enough parameters, a neural network can approximate any continuous multivariable function to any desired level of accuracy.
Features
  • Easy to use.
% Network Set Up
LayerStruct=[InputDimension,10,10,10,OutputDimension];
NN=Initialization(LayerStruct);
% Train Network
option.MaxIteration=600;
NN=OptimizationSolver(data,label,NN,option);
  • Arbitrary dimension & high-precision function approximation.
Application
  • Approximate highly nonlinear functions
  • Find patterns in noisy data
  • Handwritten digit recognition
  • Smooth noisy data and estimate derivatives
Quick Start
data=linspace(0,2*pi,1000);        % training inputs on [0, 2*pi]
label=data.*sin(data)+cos(3*data); % target function: x*sin(x)+cos(3x)
LayerStruct=[1,7,7,7,1];           % 1-D input, three hidden layers, 1-D output
NN=Initialization(LayerStruct);
option.MaxIteration=600;
NN=OptimizationSolver(data,label,NN,option);
Predict=NN.Evaluate(data);
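After training, a quick sanity check of the fit is worthwhile; a minimal sketch using base MATLAB only (not package functions), assuming Predict and label are row vectors of the same size:
plot(data,label,'k.',data,Predict,'r-'); % overlay the prediction on the data
legend('data','network prediction');
RMSE=sqrt(mean((Predict-label).^2))      % root-mean-square fitting error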
Standard Template
  • Two standard templates are available for quickly calling the main functions: "SimplifiedWorkflow.m" and "CustomizableWorkflow.m". "SimplifiedWorkflow.m" is meant to help beginners get started quickly, while "CustomizableWorkflow.m" provides more flexibility.
Instruction and Example
  • For detailed instructions on how to use the package, please refer to "GeneralGuide.mlx" (which fits the MATLAB logo using a neural net).
  • "DigitRecognition.mlx" uses a simple MLP architecture and achieves 97.6% accuracy on the MNIST handwritten-digit test set.
  • "CurveFittingFromNoisyData.mlx" demonstrates how to use neural nets to fit noisy data and estimate derivatives.
  • "CustomizableWorkflow.m" provides a standard workflow for general multivariable function approximation.
  • "MathModel.mlx" explains the mathematical model of neural nets and provides a step-by-step numerical example that may help users understand neural nets more easily.
Customizing the model and solver parameters
  • For detailed instructions, please refer to "GeneralGuide.mlx" on the examples page.
NN.Cost='MSE'; % specify the cost function
NN.InputAutoScaling='on'; % normalize inputs whose scales differ widely
NN.ActivationFunction='gaussian'; % specify the nonlinear activation
LayerStruct=[InputDimension,10,10,10,OutputDimension]; % define the network architecture
NN=Initialization(LayerStruct,NN); % initialize the network weights
%% First-stage optimization
option.MaxIteration=100;
option.Solver='ADAM';
option.BatchSize=500;
NN=OptimizationSolver(data,label,NN,option);
%% Second-stage optimization
option.MaxIteration=400;
option.Solver='BFGS';
NN=OptimizationSolver(data,label,NN,option);
Report=FittingReport(data,label,NN); % Quantify fitting performance
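The two-stage schedule above is a common pattern: a stochastic solver such as ADAM makes fast initial progress, while a quasi-Newton method such as BFGS can then converge to high accuracy near a minimum (see Nocedal & Wright in the references).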
Tips and considerations for training neural networks
  • Please refer to "Tips for Training Neural Networks.mlx", which provides detailed yet straightforward instructions for addressing the issues below.
  • Ensure that the inputs (i.e., x) have components of similar magnitude; badly scaled data can make neural-network training challenging. It is therefore recommended to preprocess (normalize) the data before starting optimization; a minimal sketch follows this list. If you are unfamiliar with preprocessing methods, the package also provides basic algorithms that should be sufficient for most situations.
  • Make sure the standard deviation of the labels is not too small, as this also makes the network difficult to train. The package includes built-in functions to handle this situation.
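A minimal z-score normalization sketch in base MATLAB (an illustration, not a package function; it assumes samples are stored column-wise, as in the Quick Start example):
mu=mean(data,2); s=std(data,0,2); % per-dimension mean and standard deviation
dataScaled=(data-mu)./s;          % z-score each input dimension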
Optimization Solvers
  1. Stochastic Gradient Descent (SGD)
  2. Stochastic Gradient Descent with Momentum (SGDM)
  3. Root Mean Square Propagation (RMSprop)
  4. Adaptive Moment Estimation (ADAM)
  5. Adaptive Moment Estimation with Weight Decay (AdamW)
  6. Broyden-Fletcher-Goldfarb-Shanno Method (BFGS)
Mathematical model of neural nets
  • W, b are the weight matrices and bias vectors of the network.
  • d is the depth of the network.
  • "sigma" is a point-wise nonlinear function, such as tanh.
  • For more detail, please refer to "MathModel.mlx"; a sketch of the recursion follows this list.
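In these symbols, the standard feedforward model is the recursion below (a generic MLP sketch in LaTeX notation; the package's exact conventions are documented in "MathModel.mlx"):

  a^{(0)} = x
  a^{(k)} = \sigma\left(W^{(k)} a^{(k-1)} + b^{(k)}\right), \quad k = 1, \dots, d-1
  \hat{y} = W^{(d)} a^{(d-1)} + b^{(d)}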
Reference
  1. J. Nocedal and S. J. Wright, Numerical Optimization.
  2. D. Goldfarb et al., "Practical Quasi-Newton Methods for Training Deep Neural Networks."
  3. Y. Ren et al., "Kronecker-factored Quasi-Newton Methods for Deep Learning."

Cite As

S0852306 (2024). Surface Fitting using Neural Networks (https://www.mathworks.com/matlabcentral/fileexchange/129589-surface-fitting-using-neural-networks), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2020a
Compatible with any release
Platform Compatibility
Windows macOS Linux

Version History and Release Notes
1.2.3

Minor update.

1.2.2

Add a weighted least-squares option, see "WeightedListSquare.m".

1.2.1

Explain the mathematical model of neural nets using a live script.

1.2.0

Solver update: AdamW, avoiding overfitting by weight decay.

1.1.9

Add MAE cost for robust surface fitting.

1.1.8

Minor update.

1.1.7

Minor solver update.

1.1.6

1. Handwritten digit recognition (MNIST).
2. Bug fixed.

1.1.5

1. Add cross-entropy cost for classification problems.
2. ReLU bug fixed.

1.1.4

1. Add Cross-Entropy Cost for Classification Task.
2. ReLU bug fixed.

1.1.3

New solver: RMSprop.

1.1.2

Minor bug fixed.
(Previous version) There was an error in the gradient calculation for the last-layer bias; strangely, it did not have a significant impact on training results.

1.1.1

Solver Improvement.

1.1.0

Improve efficiency.
Bug fixed.

1.0.9

Bug fixed.

1.0.8

Autoscaling.
Automatic derivatives.

1.0.7

Added autoscaling function.
Automatic derivative calculation (for x, i.e., the input, not the parameters of the NN).
Simplified commands.

1.0.6

Added autoscaling capability.
Added automatic derivative function for x.

1.0.5

User guide.

1.0.3

User guide.

1.0.2

User Guide

1.0.1

Added User Guide. ("Guide.mlx")

1.0.0