Metric¶
A Metric specifies the fitness function measuring the performance of the simulation. This function is applied to each simulated trace. A few metrics are already implemented and included in the toolbox, but users can also provide their own.
Mean Square Error¶
MSEMetric
is provided for
use with TraceFitter
.
It calculates the mean squared difference between the data and the simulated
trace according to the well-known formula:
\(\mathrm{MSE} = \frac{1}{N}\sum_{t=1}^{N}\left(d(t) - m(t)\right)^2\)
where \(d(t)\) and \(m(t)\) are the data and model traces and \(N\) is the number of time steps.
It can be initialized in the following way:
metric = MSEMetric()
Additionally, MSEMetric
accepts an optional start time t_start
(as a Quantity
). The start time allows the
user to ignore an initial period that will not be included in the error
calculation.
metric = MSEMetric(t_start=5*ms)
Alternatively, the user can specify a weight vector to emphasize or de-emphasize certain parts of the trace. For example, to ignore the first 5 ms and to weigh the (squared) error between 10 and 15 ms twice as strongly as the rest:
# total trace length = 50 ms; dt is the simulation time step
# (np refers to an imported numpy)
weights = np.ones(int(50*ms/dt))
weights[:int(5*ms/dt)] = 0                 # ignore the first 5 ms
weights[int(10*ms/dt):int(15*ms/dt)] = 2   # double weight between 10 and 15 ms
metric = MSEMetric(t_weights=weights)
Note that the t_weights argument cannot be combined with t_start.
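To make the weighting concrete, here is a small NumPy sketch of a weighted mean squared error. This is illustrative only: the helper name and the normalization by the sum of the weights are assumptions, not the toolbox's internal implementation.

```python
import numpy as np

def weighted_mse(model_trace, data_trace, weights):
    # Squared error per time step, scaled by the weight vector;
    # time steps with weight 0 do not contribute at all.
    squared_error = weights * (model_trace - data_trace) ** 2
    # Normalize by the sum of the weights rather than the trace length
    return squared_error.sum() / weights.sum()

# Toy traces with 500 time steps (dt = 0.1 ms, 50 ms in total)
data = np.zeros(500)
model = np.ones(500)           # constant error of 1 everywhere
weights = np.ones(500)
weights[:50] = 0               # ignore the first 5 ms
weights[100:150] = 2           # double weight between 10 and 15 ms
print(weighted_mse(model, data, weights))  # -> 1.0
```

Because the weights in the first 5 ms are zero, any disagreement between the traces in that window leaves the error unchanged.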
In OnlineTraceFitter
, the mean square error is calculated in an online manner, so there is no need
to specify a metric object.
GammaFactor¶
GammaFactor
is provided for
use with SpikeFitter
and measures the coincidence between spike times in the simulated and the target
trace. It is calculated according to:
\(\Gamma = \left(\frac{2}{1 - 2 \Delta r_{exp}}\right)\left(\frac{N_{coinc} - 2 \Delta N_{exp} r_{exp}}{N_{exp} + N_{model}}\right)\)
where:
- \(N_{coinc}\) - number of coincidences
- \(N_{exp}\) and \(N_{model}\) - number of spikes in the experimental and model spike trains
- \(r_{exp}\) - average firing rate of the experimental train
- \(2 \Delta N_{exp} r_{exp}\) - expected number of coincidences with a Poisson process
For more details on the gamma factor, see:
- Jolivet et al. 2008, “A benchmark test for a quantitative assessment of simple neuron models”, J. Neurosci. Methods.
- Clopath et al. 2007, “Predicting neuronal activity with simple models of the threshold type: adaptive exponential integrate-and-fire model with two compartments.”, Neurocomputing
The coincidence factor \(\Gamma\) is 1 if the two spike trains match exactly
and lower otherwise. It is 0 if the number of coincidences matches the number
expected from two homogeneous Poisson processes of the same rate. To turn the
coincidence factor into an error term (that is lower for better matches), two
options are offered. With the rate_correction
option (used by default), the
error term used is
\(2\frac{\lvert r_\mathrm{data} - r_\mathrm{model}\rvert}{r_\mathrm{data}} - \Gamma\),
with \(r_\mathrm{data}\) and \(r_\mathrm{model}\) being the firing rates
in the data/model. This is useful because the coincidence factor \(\Gamma\)
on its own can give high values (low errors) if the model generates many more
spikes than were observed in the data; this is penalized by the above term. If
rate_correction
is set to False
, \(1 - \Gamma\) is used as the
error.
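Written out, the two error variants look like the following sketch. The helper name gamma_error is hypothetical; the function simply restates the formulas above:

```python
def gamma_error(gamma, r_data, r_model, rate_correction=True):
    # With rate correction (the default), a rate mismatch between data
    # and model is penalized in addition to rewarding coincidences.
    if rate_correction:
        return 2 * abs(r_data - r_model) / r_data - gamma
    # Without rate correction, simply invert the coincidence factor
    # so that lower values mean better matches.
    return 1 - gamma

# A perfect match (gamma = 1, identical rates):
print(gamma_error(1.0, r_data=10.0, r_model=10.0))          # -> -1.0
print(gamma_error(1.0, 10.0, 10.0, rate_correction=False))  # -> 0.0
# A model firing twice as fast as the data is penalized:
print(gamma_error(1.0, r_data=10.0, r_model=20.0))          # -> 1.0
```

Note that with rate correction, a perfect match yields -1 rather than 0, since the rate-mismatch term vanishes while \(\Gamma = 1\) is still subtracted.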
Upon initialization the user has to specify the \(\Delta\) value, defining the maximal tolerance for spikes to be considered coincident:
metric = GammaFactor(delta=2*ms)
Warning
The delta
parameter has to be smaller than the smallest inter-spike
interval in the spike trains.
FeatureMetric¶
FeatureMetric
is provided
for use with TraceFitter
.
This metric allows the user to optimize the match of certain features between
the simulated and the target trace. The features are calculated by the
Electrophys Feature Extraction Library (eFEL); its documentation is
available at https://efel.readthedocs.io
To get a list of all the available eFEL features, you can run the following code:
import efel
efel.api.getFeatureNames()
Note
Currently, only features that are described by a single value are supported (e.g. the time of the first spike can be used, but not the times of all spikes).
To use the FeatureMetric
,
you have to provide the following input parameters:
- stim_times - a list of times indicating the start and end of the stimulus for each of the input traces. This information is used by several features, e.g. the voltage_base feature will consider the average membrane potential during the last 10% of the time before the stimulus (see the eFEL documentation for details).
- feat_list - a list of strings with the names of the features to be used
- combine - the function used to compare features between the target and simulated traces (by default, the absolute difference between the values).
Example code usage:
stim_times = [(50*ms, 100*ms), (50*ms, 100*ms), (50*ms, 100*ms), (50*ms, 100*ms)]
feat_list = ['voltage_base', 'time_to_first_spike', 'Spikecount']
metric = FeatureMetric(stim_times, feat_list, combine=None)
Note
If the times of stimulation are the same for all of the traces, then you can
specify a single interval instead: stim_times = [(50*ms, 100*ms)].
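A custom combine function can replace the default absolute difference. The sketch below assumes combine receives the model and target feature values as two arguments; the function name is made up for illustration:

```python
def squared_difference(model_value, data_value):
    # Hypothetical combine function: penalizes large feature deviations
    # more strongly than the default absolute difference would.
    return (model_value - data_value) ** 2

print(squared_difference(3.0, 1.0))  # -> 4.0
```

Such a function would then be passed as combine=squared_difference when constructing the FeatureMetric.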
Custom Metric¶
Users are not limited to the metrics provided in the toolbox. If needed, they
can provide their own metric based on one of the abstract classes
TraceMetric
and SpikeMetric
.
A new metric will need to specify the following functions:
- get_features() - calculates features/errors for each of the simulations. The representation of the model results and the target data depends on whether traces or spikes are fitted, see below.
- get_errors() - weights features/multiple errors into one final error for each set of parameters and inputs. The features are received as a 2-dimensional ndarray of shape (n_samples, n_traces). The output has to be an array of length n_samples, i.e. one value for each parameter set.
- calc() - performs the error calculation across simulations for all parameters of each round. This method is already implemented in the abstract class and therefore does not necessarily need to be reimplemented.
TraceMetric¶
To create a new metric for
TraceFitter
, you have
to inherit from TraceMetric
and overwrite the get_features
and/or
get_errors
method. The model traces for the
get_features
function are provided as a 3-dimensional
ndarray
of shape (n_samples, n_traces, time steps)
,
where n_samples
is the number of different parameter sets that have been evaluated, and
n_traces
the number of different stimuli that have been evaluated for each parameter
set. The output of the function has to have the shape (n_samples, n_traces)
. This array is the input to the
get_errors
method (see above).
class NewTraceMetric(TraceMetric):
    def get_features(self, model_traces, data_traces, dt):
        ...

    def get_errors(self, features):
        ...
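As a concrete illustration, a mean-absolute-error metric could fill in this skeleton as follows. The class name MAEMetric is made up, and the sketch is written without the actual base class so that it runs standalone; in real use the class would inherit from TraceMetric:

```python
import numpy as np

class MAEMetric:  # in real use: class MAEMetric(TraceMetric)
    def get_features(self, model_traces, data_traces, dt):
        # model_traces: shape (n_samples, n_traces, n_time_steps);
        # data_traces: shape (n_traces, n_time_steps), broadcast against
        # every parameter set. One error value per sample and trace.
        return np.mean(np.abs(model_traces - data_traces), axis=2)

    def get_errors(self, features):
        # Reduce the (n_samples, n_traces) feature array to one error
        # per parameter set by averaging over the traces.
        return features.mean(axis=1)
```

The averaging over the time axis in get_features produces exactly the required (n_samples, n_traces) array, which get_errors then collapses to one value per parameter set.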
SpikeMetric¶
To create a new metric for
SpikeFitter
, you have
to inherit from SpikeMetric
.
Inputs of the metric in get_features
are a nested list
structure for the spikes generated by the model: a list where each element
contains the results for a single parameter set. Each of these results is a list
for each of the input traces, where the elements of this list are numpy arrays
of spike times (without units, i.e. in seconds). For example, if two parameter
sets and three different input stimuli were tested, this structure could look
like this:
[
[array([0.01, 0.5]), array([]), array([])],
[array([0.02]), array([]), array([])]
]
This means that both parameter sets only generate spikes for the first input stimulus; the first parameter set generates two spikes, while the second generates only one.
The target spikes are represented in the same way as a list of spike times for
each input stimulus. The results of the function have to be returned as in
TraceMetric
, i.e. as a 2-d array of shape
(n_samples, n_traces)
.
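To illustrate the conversion, a hypothetical get_features for a metric that simply compares spike counts could flatten the nested structure into the required 2-d array like this (the function name is made up for illustration):

```python
import numpy as np

def spike_count_features(model_spikes, data_spikes):
    # model_spikes: list (one per parameter set) of lists (one per input
    # trace) of spike-time arrays; data_spikes: list (one per input
    # trace) of spike-time arrays. Returns shape (n_samples, n_traces).
    return np.array([
        [abs(len(model_trace) - len(data_trace))
         for model_trace, data_trace in zip(sample, data_spikes)]
        for sample in model_spikes
    ])

# The nested structure from the example above:
model_spikes = [
    [np.array([0.01, 0.5]), np.array([]), np.array([])],
    [np.array([0.02]), np.array([]), np.array([])],
]
data_spikes = [np.array([0.01]), np.array([]), np.array([])]
print(spike_count_features(model_spikes, data_spikes))
# -> [[1 0 0]
#     [0 0 0]]
```

Here the first parameter set produces one spike too many for the first stimulus, while the second matches the data exactly.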