More Keras additions
This merge request makes a few additions to the Keras classifier:
- Change `default_delta_time` to `-window` to be consistent with the sklearn classifiers.
- Add the ability to apply regularizers (`l1`, `l2`, or `l1_l2`) to layers through the INI.
- Expose convenience variables that can be used in the INI, such as `Ncol`, `Nchan`, or `Ntotal`, so that layer sizes can be defined in terms of them, e.g. `0.5Nchan`.
- Change how validation sets are passed into Keras models: shuffled train and validation sets are now created with sklearn from the full dataset and passed into the model (see the sketch at the end of this description). This gives more accurate validation metrics, and the resulting models appear to have better ROC curves.
- Add `LocallyConnected1D` and `Dropout` layers as possible options. The locally-connected layer can be seen as doing the following (for a `kernel_size` and stride of 2):
```mermaid
graph TD;
snr_1-->A;
dt_1-->A;
snr_2-->B;
dt_2-->B;
snr_3-->C;
dt_3-->C;
```
while a dense layer would instead do:
```mermaid
graph TD;
snr_1-->A;
dt_1-->A;
snr_2-->A;
dt_2-->A;
snr_3-->A;
dt_3-->A;
snr_1-->B;
dt_1-->B;
snr_2-->B;
dt_2-->B;
snr_3-->B;
dt_3-->B;
snr_1-->C;
dt_1-->C;
snr_2-->C;
dt_2-->C;
snr_3-->C;
dt_3-->C;
```
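For concreteness, here is a minimal, self-contained sketch (not the project code) of the two connectivity patterns above, assuming `tensorflow.keras`. Note that `LocallyConnected1D` has been removed from recent Keras releases, so an older TensorFlow/Keras version may be needed to run it.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# One example row, flattened as [snr_1, dt_1, snr_2, dt_2, snr_3, dt_3]
# and shaped (batch, steps, channels) = (1, 6, 1).
x = np.random.rand(1, 6, 1).astype("float32")

# LocallyConnected1D with kernel_size=2 and strides=2: three output nodes
# (A, B, C), each connected only to its own (snr_i, dt_i) pair, with
# unshared weights per window (unlike Conv1D, which shares them).
local = layers.LocallyConnected1D(filters=1, kernel_size=2, strides=2,
                                  activation="relu")
print(local(x).shape)  # (1, 3, 1) -> nodes A, B, C

# A Dense layer over the flattened inputs connects every input to every
# output node, as in the second diagram.
dense = keras.Sequential([
    keras.Input(shape=(6, 1)),
    layers.Flatten(),
    layers.Dense(3, activation="relu"),  # nodes A, B, C
])
print(dense(x).shape)  # (1, 3)
```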
Here's what some of the configurations look like:
```ini
[ANN]
...
balanced = True
loss = binary_crossentropy
optimizer = adam
metrics = accuracy
validation_split = 0.2
layers =
    Local1D 1 Ncol Ncol relu
    Dense 0.1Nchan relu l1 0.01
    Dense 1 sigmoid
```
This creates a neural network with three layers:
- A locally-connected 1D layer with a single filter, kernel size and stride set to `Ncol`, and a relu activation function
- A dense layer with `Nchan`/10 nodes, a relu activation function, and l1 regularization with a penalty of 0.01
- A single-node dense layer with a sigmoid activation
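Roughly, that `[ANN]` block resolves to the following in Keras terms once the convenience variables are substituted. This is an illustrative sketch only, not the actual layer-building code: the `Ncol`/`Nchan` values, the added `Flatten`, and the exact argument mapping are assumptions made for the example.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Hypothetical values for the convenience variables.
Ncol = 2            # columns per channel (e.g. snr and dt)
Nchan = 100         # number of channels
Ntotal = Ncol * Nchan

model = keras.Sequential([
    keras.Input(shape=(Ntotal, 1)),
    # "Local1D 1 Ncol Ncol relu": one filter, kernel size and stride of Ncol.
    layers.LocallyConnected1D(filters=1, kernel_size=Ncol, strides=Ncol,
                              activation="relu"),
    # Flatten added here so the Dense layers see a 1D feature vector; the
    # real builder may handle shapes differently.
    layers.Flatten(),
    # "Dense 0.1Nchan relu l1 0.01": 0.1*Nchan nodes with l1 regularization.
    layers.Dense(int(0.1 * Nchan), activation="relu",
                 kernel_regularizer=regularizers.l1(0.01)),
    # "Dense 1 sigmoid": single output node.
    layers.Dense(1, activation="sigmoid"),
])

# loss, optimizer and metrics from the [ANN] block.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```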
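Finally, a sketch of the validation-set change from the bullet list above: the split is made with sklearn's `train_test_split` (shuffled) and passed to `fit` via `validation_data`, instead of relying on Keras' own `validation_split` argument, which takes the unshuffled tail of the data. The toy data and the small stand-in model here are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-ins for the feature matrix and labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=1000)

# Shuffled split, sized by the INI's validation_split option (0.2 above).
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, shuffle=True)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Validation metrics are now computed on a shuffled sample of the full
# dataset rather than on its unshuffled tail.
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=5, batch_size=32, verbose=0)
```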