API Reference
activate(l::FFNNLayer)
¶
Activation of a Neural Network layer
ceerror(output, target)
¶
Cross-entropy error between the output of the network and a target
classerror(net::FFNNet, inputs::Array{Array{Float64, 1}, 1}, outputs::Array{Array{Float64, 1}, 1})
¶
Classification error of the network on this sample (consisting of inputs and outputs). The error is measured by counting the number of misclassified inputs.
Arguments
net (FFNNet): Feedforward Neural Network
inputs (Vector{Vector{Float64}}): Vector containing input examples (each one is a vector). This must have K elements, K being the number of examples in this dataset. Each input vector must have I elements, where I is the input size of net.
outputs (Vector{Vector{Float64}}): Vector containing output examples (each one is a vector). This must also have K elements, K being the number of examples in this dataset. Each output vector must have O elements, where O is the size of the output layer of net.
Details
This function assumes that the classification of a vector is the index of the maximum value inside that vector. For example, the vector [0,1,0] is classified as 2. The function takes the classification of the network's output and compares it to the classification of the example's output, counting the number of mismatches.
Therefore, it is natural to use a softmax activation on the last layer of net and one-hot encoding on the examples' outputs, but this is not mandatory.
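A minimal usage sketch, assuming the package module is already loaded (its name is not given in this reference) and using the shorthand constructor documented further below; the data here is illustrative only:

    net = FFNNet(2, 3, 3)                       # input size 2, hidden layer of 3, output layer of 3
    inputs  = Vector{Float64}[[0.2, 0.1], [0.9, 0.7]]
    outputs = Vector{Float64}[[1.0, 0.0, 0.0],  # one-hot encoded targets
                              [0.0, 0.0, 1.0]]
    classerror(net, inputs, outputs)            # number of inputs whose predicted class (argmax) mismatches the target's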
der(f::Function)
¶
Derivative of a function
meanerror(net::FFNNet, inputs::Array{Array{Float64, 1}, 1}, outputs::Array{Array{Float64, 1}, 1})
¶
Mean error of the network on this sample (consisting of inputs and outputs). The error is measured using the cost function.
Arguments
net (FFNNet): Feedforward Neural Network
inputs (Vector{Vector{Float64}}): Vector containing input examples (each one is a vector). This must have K elements, K being the number of examples in this dataset. Each input vector must have I elements, where I is the input size of net.
outputs (Vector{Vector{Float64}}): Vector containing output examples (each one is a vector). This must also have K elements, K being the number of examples in this dataset. Each output vector must have O elements, where O is the size of the output layer of net.
Keyword Arguments
cost (Function, quaderror by default): Cost function to be used as the error measure.
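A brief sketch of a call, again assuming the package module is loaded; the network and data are illustrative only:

    net = FFNNet(2, 3, 1)                             # input size 2, hidden layer of 3, one output neuron
    inputs  = Vector{Float64}[[0.0, 1.0], [1.0, 0.0]]
    outputs = Vector{Float64}[[1.0], [-1.0]]          # targets in (-1, 1) suit the default tanh output
    meanerror(net, inputs, outputs)                   # mean quadratic error (cost = quaderror by default)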
propagate(net::FFNNet, x::Array{Float64, 1})
¶
Propagate an input x through the network net and return the output.
Arguments
net (FFNNet): A neural network that will process the input and give the output
x (Vector{Float64}): The input vector. This must be of the same size as the input size of net.
Returns
The output of the network (Vector{Float64}). This is simply the activation of the last layer of the network after forward-propagating the input.
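A minimal sketch, assuming the package module is loaded:

    net = FFNNet(2, 4, 1)        # network with input size 2
    x   = [0.5, -0.3]            # input vector; its length must match the network's input size
    y   = propagate(net, x)      # Vector{Float64} with one element: the output layer's activation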
quaderror(output, target)
¶
Quadratic error between the output of the network and a target
train!(net::FFNNet, inputs::Array{Array{Float64, 1}, 1}, outputs::Array{Array{Float64, 1}, 1})
¶
Train the Neural Network using the examples provided in inputs and outputs.
Arguments
net (FFNNet): Feedforward Neural Network to be trained.
inputs (Vector{Vector{Float64}}): Vector containing input examples (each one is a vector). This must have K elements, K being the number of examples in this dataset. Each input vector must have I elements, where I is the input size of net.
outputs (Vector{Vector{Float64}}): Vector containing output examples (each one is a vector). This must also have K elements, K being the number of examples in this dataset. Each output vector must have O elements, where O is the size of the output layer of net.
Keyword Arguments
α (Real, 0.5 by default): Learning rate.
η (Real, 0.1 by default): Momentum rate.
epochs (Int, 1 by default): Number of iterations of the learning algorithm over this dataset.
batchsize (Int, 1 by default): Size of the batch used by the algorithm (a batch size of 1 is plain stochastic gradient descent).
cost (Function, quaderror by default): Cost function to be minimized by the learning algorithm.
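A sketch of a typical training call, assuming the package module is loaded; the data and hyperparameters are illustrative, not recommended values:

    net = FFNNet(2, 4, 1)                              # 2 inputs, 4 hidden neurons, 1 output

    # XOR-like toy data; targets in (-1, 1) suit the default tanh output layer
    inputs  = Vector{Float64}[[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
    outputs = Vector{Float64}[[-1.0], [1.0], [1.0], [-1.0]]

    train!(net, inputs, outputs, α=0.3, epochs=500)    # batchsize = 1, i.e. plain stochastic gradient descent
    meanerror(net, inputs, outputs)                    # should be smaller than before training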
update!(l::FFNNLayer, x::Array{Float64, 1})
¶
Update the internal values of the neurons of a layer
FFNNLayer
¶
Type representing a 1-D Neural Network layer with N neurons and, optionally, a bias unit.
Fields
neurons (Vector{Float64}): Vector with each neuron and, if bias = true, a bias unit (index 1)
activation (Function): Activation function
bias (Bool): True if there is a bias unit in this layer
FFNNet
¶
Type representing a Neural Network.
Fields
layers (Vector{FFNNLayer}): Vector containing each layer of the network
weights (Vector{Matrix{Float64}}): Vector containing the weight matrices between layers
inputsize (Int): Input size accepted by this network
derivatives
¶
Dictionary associating functions with their derivatives
call(::Type{FFNNLayer}, n::Integer)
¶
Construct a 1-D layer of a Neural Network with n neurons and, optionally, a bias unit. The layer has tanh as its activation function.
Arguments
n (Int): Number of neurons in this layer (not counting the optional bias unit)
Keyword Arguments
bias (Bool, true by default): True if there is a bias unit in this layer
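For instance (assuming the package module is loaded):

    hidden = FFNNLayer(5)               # 5 neurons, tanh activation, bias unit included by default
    output = FFNNLayer(3, bias=false)   # 3 neurons, tanh activation, no bias unit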
call(::Type{FFNNLayer}, n::Integer, f::Function)
¶
Construct a 1-D layer of a Neural Network with n neurons, f as its activation function and, optionally, a bias unit.
Arguments
n (Int): Number of neurons in this layer (not counting the optional bias unit)
f (Function): Activation function of this layer
Keyword Arguments
bias (Bool, true by default): True if there is a bias unit in this layer
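A sketch with a custom activation, assuming the package module is loaded. The relu function below is a hypothetical helper defined on the spot, not part of this API; for training, the derivative of a custom activation presumably has to be known to the package (see the derivatives entry above).

    relu(x) = max(0.0, x)                    # hypothetical custom activation
    hidden  = FFNNLayer(8, relu)             # 8 neurons, relu activation, bias unit by default
    output  = FFNNLayer(4, tanh, bias=false) # 4 neurons, tanh activation, no bias unit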
call(::Type{FFNNet}, layers::Array{FFNNLayer, 1}, inputsize::Int64)
¶
Construct a network given its layers and its input size.
Arguments
layers (Vector{FFNNLayer}): A vector with all the layers of the network (in order, with the last one being the output layer).
inputsize (Int): Integer specifying the input size of the network.
Returns
A Neural Network (FFNNet)
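For example (assuming the package module is loaded):

    layers = FFNNLayer[FFNNLayer(4), FFNNLayer(2, bias=false)]   # hidden layer and output layer
    net    = FFNNet(layers, 3)                                   # network accepting inputs of size 3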
call(::Type{FFNNet}, sizes::Int64...)
¶
Construct a network given the input size and the sizes of each layer. By default, the hidden layers have a bias unit and the output layer does not. All layers have tanh as their activation function by default.
Arguments
sizes (Int...): Integers specifying the sizes for the network. The first is the input size and the rest are the sizes of each network layer, from the first hidden layer up to the output layer.
Returns
A Neural Network (FFNNet{N,I})
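For example, the following sketch builds a network equivalent in shape to stacking the layers by hand (assuming the package module is loaded):

    net = FFNNet(3, 5, 5, 2)   # input size 3, two hidden layers of 5 neurons each, output layer of 2 neurons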
ceerrorprime(output, target)
¶
Derivative of the cross-entropy error with respect to the outputs
quaderrorprime(output, target)
¶
Derivative of the quadratic error with respect to the outputs
⊗(a::Array{Float64, 1}, b::Array{Float64, 1})
¶
Outer product
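A small sketch of the operator; the result shown assumes the usual outer-product convention (a * b'):

    a = [1.0, 2.0]
    b = [3.0, 4.0, 5.0]
    a ⊗ b          # 2×3 matrix: [3.0 4.0 5.0; 6.0 8.0 10.0]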