All neurons can perform linearly non-separable computations

Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor in which voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact; it is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate-and-fire neuron to account for saturation due to interacting synapses. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and show how our neuron can perform them. We thus demonstrate that even a single-layer neuron with interacting synapses has more computational capacity than one without. Because all neurons have one or more layers, we conclude that all neurons can potentially implement linearly non-separable computations.


Introduction
We show here how interactions between synapses can extend the computational capacity of all neurons, even the tiniest. We already knew that dendrites might extend the computational capacity of some pyramidal neurons: their extended dendrites, capable of dendritic spikes, changed the way we saw them (see Ref. 1 for one of the first articles presenting this idea). More recently, a study suggested that we should model a pyramidal neuron as a two-layer neural network. 2 This theoretical model was further consolidated by experiments showing that we can see a pyramidal neuron as a collection of non-linear subunits. 3 Certain non-linearities can even allow a dendrite to implement the exclusive or (XOR). 4 Moreover, a similar kind of non-monotonic non-linearity was found in human pyramidal neurons. 5 But what about other neurons, with modest dendrites incapable of spiking?
Pyramidal neurons represent only a fraction of all neurons. For instance, the dendrites of cerebellar stellate cells cannot emit spikes, but they do saturate, 6 and they can be decomposed into multiple independent subunits (with independent membrane voltages), turning them into two-stage units like the pyramidal neuron. 7 Previously, we have shown that passive dendrites are sufficient to enable a neuron to perform linearly non-separable computations, for instance the feature binding problem. 8 Here, we go one step further and focus on cells with a modest and passive dendritic tree. We use the fact that even in this case spatially nearby synapses can interact, due to glutamate spillover (for review see Ref. 9). We show that these cells, despite having a single voltage, can compute linearly non-separable functions. In the present study, we use these neurons as the smallest common denominator, and we thus conclude that all neurons can perform linearly non-separable functions.

Methods
An integrate and fire neuron with interacting synapses (the SIF)

We started from a leaky integrate and fire (LIF) neuron. This model has a membrane voltage v governed by the following equation:

τ dv(t)/dt = −(v(t) − v_E) + R I_s(t)    (1)

with τ = 20 ms the neuron time constant, v(t) the membrane voltage at time t, and v_E = −65 mV the resting membrane voltage. R = 20 MΩ is the membrane resistance, and I_s(t) models the time-varying synaptic input current. Each time the voltage reaches V_t = −62 mV a spike is triggered and the voltage is reset to −65 mV. We used the following equation to account for the synaptic inputs:

I_s(t) = Σ_i g_i(t) (E_s − v(t))    (2)
The synaptic current depends on the difference between v(t), the neuron voltage (equal everywhere), and E_s, the synaptic reversal potential (0 mV). In the present work we have four input sources but, contrary to what is usually done, only two conductances, g_1 and g_2, which collapse the conductances from inputs 1,2 and 3,4 respectively. This accounts for the interaction between the input sources and does not consider them fully independent, as is usually the case. Each g_i is bounded between 0 and 100 pS. Each g_i jumps instantaneously to its maximal value for each incoming input spike and decays exponentially with time constant τ_s = 1 ms. In a LIF, all synaptic inputs are gathered under a single umbrella and i = 1. In the present work we cluster synaptic inputs into two groups (one green and one blue, see Figure 1).

REVISED Amendments from Version 2
We changed the title from "any" to "all" to further emphasize the generality of our findings, and we slightly modified the introduction to insist on the breadth of our work and to cite a review on glutamate spillover.
We changed panels B and C of the figure to more closely follow the truth table; each panel uses a different interpretation of the 0s and the 1s.
We clarified the peculiarity of equation 2 and justified it.
We corrected minor typos, and we thank the reviewers for their valuable comments, which improved the quality of the manuscript.
We upgraded the discussion to state that we do not use a reduction in driving force to implement sublinear summation.
Any further responses from the reviewers can be found at the end of the article.

We used the Brian 2 software to carry out our simulations; the code is freely available in the git repository attached to this report.
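The SIF dynamics described above can be sketched in a few lines of pure Python (a toy Euler-integration sketch, not the Brian 2 script attached to this report; the helper name `simulate_sif` and the single-spike stimulus are our illustrative choices). With the literal parameters of the Methods a lone input spike stays subthreshold, so the sketch also reports the peak membrane voltage, which already distinguishes clustered from scattered input:

```python
import math

# Parameters from the Methods (SI units).
DT = 1e-4            # Euler time step (s)
TAU = 20e-3          # membrane time constant tau
V_E = -65e-3         # resting / reset potential v_E
R = 20e6             # membrane resistance R
V_T = -62e-3         # spike threshold V_t
E_S = 0.0            # synaptic reversal potential E_s
G_MAX = 100e-12      # saturation bound on each conductance group
TAU_S = 1e-3         # synaptic decay time constant tau_s

def simulate_sif(spikes_per_group, t_stop):
    """Integrate Eq. 1 with the saturating synaptic current of Eq. 2.

    spikes_per_group: two lists of presynaptic spike times (s), one per
    collapsed conductance group g_1, g_2. Clustered input is modelled by
    sending every spike to the same group, which cannot exceed G_MAX.
    Returns (output spike times, peak membrane voltage).
    """
    v, peak = V_E, V_E
    g = [0.0, 0.0]
    decay = math.exp(-DT / TAU_S)
    out = []
    for step in range(int(t_stop / DT)):
        t = step * DT
        for i, times in enumerate(spikes_per_group):
            if any(abs(t - ts) < DT / 2 for ts in times):
                g[i] = G_MAX                      # bounded instantaneous jump
        i_syn = sum(gi * (E_S - v) for gi in g)   # Eq. 2
        v += DT / TAU * (-(v - V_E) + R * i_syn)  # Eq. 1
        g = [gi * decay for gi in g]
        peak = max(peak, v)
        if v >= V_T:                              # spike, then reset
            out.append(t)
            v = V_E
    return out, peak
```

For instance, two synchronous spikes scattered over both groups depolarize this model roughly twice as much as the same two spikes clustered on one saturated group; this asymmetry is what the cFBP implementation below relies on.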

Boolean algebra refresher
First, let us define Boolean functions.

Definition 1. A Boolean function of n variables is a function from {0, 1}^n into {0, 1}, where n is a positive integer.
Importantly, it is commonly assumed that neurons can only implement linearly separable computations.

Definition 2. f is a linearly separable computation of n variables if and only if there exists at least one vector w ∈ R^n and a threshold Θ ∈ R such that:

f(X) = 1 ⟺ w · X > Θ

where X ∈ {0, 1}^n is the vector notation for the Boolean input variables.
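For small n, Definition 2 can be checked mechanically by scanning a grid of integer weights and thresholds (an illustrative sketch; the function name is ours, and failure on a finite grid is only conclusive for functions, like XOR, that are known to be non-separable over all real weights):

```python
from itertools import product

def is_linearly_separable(f, n, grid=range(-3, 4)):
    """Brute-force Definition 2 over integer weights and thresholds.

    Returns True iff some w in grid^n and Theta in grid satisfy
    f(X) == (w . X > Theta) for every X in {0,1}^n.  For n = 2 this
    grid is large enough to realise every linearly separable function.
    """
    points = list(product((0, 1), repeat=n))
    for w in product(grid, repeat=n):
        for theta in grid:
            if all(f(x) == (sum(wi * xi for wi, xi in zip(w, x)) > theta)
                   for x in points):
                return True
    return False

# AND is linearly separable (w = (1, 1), Theta = 1 works); XOR is not.
assert is_linearly_separable(lambda x: x[0] & x[1], 2)
assert not is_linearly_separable(lambda x: x[0] ^ x[1], 2)
```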

Results
The compact feature binding problem (cFBP)

In this section, we introduce the class of compact linearly non-separable computations that we are going to study. These computations are compact because they only need to be defined on four input lines: changing the value of the function on the other input lines would not affect our result.
We entirely specify an example in Table 1. This computation, which we call the compact feature binding problem (cFBP), is linearly non-separable.

Proposition 1. The cFBP is linearly non-separable.
Proof. The output must be 0 for two disjoint couples of active inputs, (1,2) and (3,4). This means that w_1 + w_2 ≤ Θ and w_3 + w_4 ≤ Θ, and we can add these two inequalities to obtain w_1 + w_2 + w_3 + w_4 ≤ 2Θ. However, the output must be 1 for two other couples made of the same active inputs, (1,3) and (2,4). This means that w_1 + w_3 > Θ and w_2 + w_4 > Θ, and we can add these two inequalities to obtain w_1 + w_2 + w_3 + w_4 > 2Θ. This yields a contradiction, proving that no weight set solves this set of inequalities.
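The contradiction at the heart of the proof can also be probed numerically: both pairings, (1,2)/(3,4) and (1,3)/(2,4), sum to the same total w_1 + w_2 + w_3 + w_4, so no candidate weight set ever satisfies all four cFBP constraints (a sanity-check sketch; the function name and the random search are ours):

```python
import random

def satisfies_cfbp(w, theta):
    """The four cFBP constraints from Table 1 (1-based input indices):
    pairs (1,2) and (3,4) must stay at or below Theta (output 0), while
    pairs (1,3) and (2,4) must exceed it (output 1)."""
    w1, w2, w3, w4 = w
    return (w1 + w2 <= theta and w3 + w4 <= theta and
            w1 + w3 > theta and w2 + w4 > theta)

# Both pairings sum to w1 + w2 + w3 + w4, so the constraints demand that
# the same number be both <= 2*Theta and > 2*Theta: impossible.
random.seed(0)
assert not any(
    satisfies_cfbp([random.uniform(-10, 10) for _ in range(4)],
                   random.uniform(-10, 10))
    for _ in range(10_000))
```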
The cFBP is compact because it specifies only four lines of a function; a complete definition would include 16 distinct input/output relationships. This incomplete definition, which leaves all the remaining input/output relations unspecified, is minimal. This computation is as complex as the famous exclusive OR (XOR). Note here that our SIF can also implement the XOR, using a parameter set explained here [?]. However, contrary to the XOR, the cFBP can be implemented with excitatory inputs and a monotone transfer function. 8 We can extend the cFBP by increasing the number of inputs; in this case we deal with tuples instead of couples. As such, the cFBP corresponds to an entire family of linearly non-separable computations, and a SIF can implement them using the strategy that we present in the next section.
A LIF, with its linear integration, cannot implement such a computation, while a neuron with two groups of saturating synapses can easily implement it. We already proved how a ball-and-stick biophysical model can implement this computation in a previous study. 8

Implementing the cFBP in a saturating integrate and fire (SIF)

We use two independently saturating conductances to implement the cFBP in a minimal extension of the LIF. The SIF has a single membrane voltage to account for its compactness, so we might wonder how local saturation can arise in such a morphology. Saturation has two possible origins: (1) a reduction in driving force can cause saturation, as in Ref. 6, but (2) it can also be due to the intrinsic limitation in conductance per unit of surface. This latter possibility makes saturation possible in an electrically compact neuron. In every case the conductance reaches an upper bound per unit of surface, and the only way to increase excitation is to stimulate a larger area. We employ this local bounding of the conductance to implement the cFBP in a SIF. To do that, we only need two saturating points, as shown in Figure 1A. We can interpret the 0s and 1s in the truth table in at least two ways: (1) either the pre- or post-synaptic neuron activates, (2) or it reaches a given spike frequency.
In the following section, we use both interpretations. We first consider a pre-synaptic input active when it fires a 50 Hz Poisson spike train and inactive when it does not fire (this value is arbitrary and can vary largely to match a neuron's working range). We stimulate our model in four distinct episodes to reproduce the truth table from the previous section. Figure 1 displays the two interpretations of the truth table: one rate-based and one spike-based. In both cases we can observe that locally bounding g enables the implementation of the cFBP. When g has no bound, the membrane voltage always reaches the spiking threshold at the same speed (LIF case). When we locally bound conductances, the total input current is higher if inputs target two points rather than one (total g = 200 pS vs g = 100 pS). All in all, a SIF responds differently in the clustered and scattered cases while a LIF does not, which enables a SIF to implement the cFBP while a LIF cannot.
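The clustered/scattered asymmetry boils down to simple arithmetic on the bounded conductances (an illustrative sketch; the helper name and the 100 pS per-input peak conductance are our assumptions, following the per-group bound stated in the Methods):

```python
G_CAP = 100e-12   # per-site conductance bound (S), from the Methods

def total_conductance(active_inputs_per_site, g_per_input=100e-12):
    """Total conductance when each site saturates at G_CAP.
    active_inputs_per_site: number of co-active inputs at each site."""
    return sum(min(n * g_per_input, G_CAP) for n in active_inputs_per_site)

# Scattered (inputs 1 and 3: one input per site) vs clustered (inputs 1
# and 2: both on the same site). Two active inputs in both cases, but:
g_scattered = total_conductance([1, 1])   # 100 pS + 100 pS = 200 pS
g_clustered = total_conductance([2, 0])   # min(200, 100) pS = 100 pS
assert g_scattered == 2 * g_clustered
```

Any spiking threshold that requires more than 100 pS but at most 200 pS of total conductance then fires for scattered input and stays silent for clustered input, which is exactly the cFBP.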

Discussion/Conclusion
In this brief report, we introduced a small extension of the leaky integrate and fire neuron: the saturating integrate and fire neuron, which can implement linearly non-separable computations. Moreover, we have shown that two saturating points suffice. The SIF's multiple, distinctly bounded conductances underlie this ability.
Importantly, a reduction in driving force is not the main actor triggering sublinear summation in a SIF. The threshold value V_t = −62 mV guarantees that we always stay far from the synaptic reversal potential E_s = 0 mV. Furthermore, a granule cell has a single membrane voltage through which saturating groups of synapses would interact, undermining their parallel processing. This would also be the case if the conductances were voltage-gated. The implementation of a linearly non-separable computation would then be impossible in a single-compartment neuron because of the interaction via the unique membrane potential. The use of locally bounded conductances is crucial to make our prediction possible.
The experiment demonstrating this prediction seems straightforward. One would need to stimulate four independent groups of mossy fibres following our different scenarios, and then record how a group of granule cells responds, for instance using optical reporting (i.e. calcium imaging). We predict that a significant fraction of granule cells might implement the cFBP. This prediction could reveal the true potential of single neurons. The next step consists of looking at the network level, as already done with spiking dendrites. 10 The origin of the sublinearity might remain uncertain, but it would be certain that these neurons implement a linearly non-separable computation.

Data availability
No data are associated with this article.

Software availability
• Source code available from: https://github.com/rcaze/21_03Ca/tree/1

In this report the author extends his previous work (ref. 3) demonstrating that simple neurons with two saturating nonlinearities can implement certain non-trivial computational problems, i.e., the feature binding problem (FBP). The novelty of the current implementation is that it places the nonlinearity in the synaptic conductance term of the input instead of in the reduction of the synaptic driving force. This way the FBP can be implemented with electrically compact neurons without the need for independent electrical subunits.

I have two main concerns:
○ Although the idea that nonlinear integration between different input streams could be implemented at the synaptic conductance level is plausible, actual experimental data supporting this hypothesis are not cited.

○ The authors introduce the Dendritic Integrate and Fire model as a variant of the Leaky Integrate and Fire harboring multiple groups of interacting synaptic conductances. Since the model does not assume that the sites target physically separate dendritic compartments, I found the name potentially misleading.

Specific comments:
○ The introduction states that cerebellar granule cells can be decomposed into multiple independent subunits, but the reference cited [ref. 8] does not directly imply this.

○ In Eq. 1, I_s is an input current, not an input conductance.
○ I suggest highlighting that g in Eq. 2 denotes the total synaptic conductance associated with a group of input synapses, which is quite an unusual assumption for most modelers.
○ Fig. 1C: It seems to me that the membrane potential is reset after each spike, but details of this reset are missing.
○ It is not clear why the cFBP is compact. It is an n=4-dimensional problem, so it is defined by its 2^4=16 input-output pairs. Even if we restrict ourselves to the mappings with exactly 2 of the inputs being active, there are 6 such pairs. I understand that it remains linearly non-separable no matter how we define the remaining two mappings, but the definition ('compact because they have four input/output lines') still feels somewhat vague and arbitrary.
○ The statement 'a reduction in driving force does not generate sublinear summation in a DIF' is false. A reduction of driving force would generate sublinear summation even in a DIF. What the author might want to say is that in this particular example sublinear integration was not associated with a reduction of driving force.

○ Last paragraph: It is unclear how the proposed experiment would test whether the granule cells implementing the cFBP use saturating input conductance or driving-force reduction as the biophysical mechanism to solve the FBP.

If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
- I proposed glutamate spillover as a candidate mechanism by which nearby synapses interact. However, its effect on firing has, to our knowledge, never been investigated. This work thus makes a strong experimental prediction and gives a computational role to glutamate spillover. I added two references to further support this argument.
The authors introduce the Dendritic Integrate and Fire model as a variant of the Leaky Integrate and Fire harboring multiple groups of interacting synaptic conductance. Since the model does not assume that the sites target physically separate dendritic compartments, I found the name potentially misleading.
- I renamed our model the Saturating Integrate and Fire; this model accounts for a neuron where synapses target distinct points at which they interact. I now discuss this statement extensively in our conclusion.

Specific comments:
The introduction states that cerebellar granule cells can be decomposed into multiple independent subunits, but the reference cited [ref. 8] does not directly imply this.
- I clarified that ref. 6 concerns pyramidal neurons only, and not granule cells. I also insist in the introduction on the fact that granule cells are isopotential structures, impossible to decompose into subunits.
In Eq. 1, I_s is an input current, not an input conductance.
- I corrected the mistake.

I suggest highlighting that g in Eq. 2 denotes the total synaptic conductance associated with a group of input synapses, which is quite an unusual assumption for most modelers.
- I now underline this crucial difference and discuss it in our conclusion.

Fig. 1C: It seems to me that the membrane potential is reset after each spike, but details of this reset are missing.
- Spikes arise when the voltage reaches −62 mV. I specified the reset voltage in the Methods section.
- This no longer applies in the current version of the manuscript.

It is not clear why the cFBP is compact. It is an n=4-dimensional problem, so it is defined by its 2^4=16 input-output pairs. Even if we restrict ourselves to the mappings with exactly 2 of the inputs being active, there are 6 such pairs. I understand that it remains linearly non-separable no matter how we define the remaining two mappings, but the definition ('compact because they have four input/output lines') still feels somewhat vague and arbitrary.
- The cFBP is compact because only four input lines need to be defined; all the remaining 12 can take any value. We now explain this in the Results section.

The statement 'a reduction in driving force does not generate sublinear summation in a DIF' is false. A reduction of driving force would generate sublinear summation even in a DIF. What the author might want to say is that in this particular example sublinear integration was not associated with a reduction of driving force.
- The reduction in driving force would be insufficient in an isopotential neuron, as it would affect the independence of g_1 and g_2. I now further underline this point in the discussion.
Last paragraph: It is unclear how the proposed experiment would test whether the granule cells implementing cFBP use saturating input conductance or driving force reduction as a biophysical mechanism to solve the FBP.
- I agree that the experiment would only demonstrate that granule cells are capable of a linearly non-separable computation. However, given their isopotential structure, it is unlikely that saturation is due to a localized reduction in driving force. This point is now further emphasised in the discussion.

We appreciate the clarifications/corrections made by the author and we support this work for indexing. We have some minor comments that do not change the impact of the work.
○ Page 5: "You can observe on Figure 1 that locally bounding g enables implement the of cFBP". Consider removing the 'of' preposition.
○ Figure Legend: "filled circles stand for a >50 Hz input spike train while empty circles stand for >50 Hz input spike train.", please change the second ">" to "<".
○ Figure 1B and C: In Figure 1B, the color scheme denotes the LIF vs. DIF models. The same color scheme is used in panel C to denote the scattered vs. clustered case. As it stands now, it is confusing. Consider changing the line style on panel C.
○ Figure Legend: "In the clustered case (grey), the neuron reach spike threshold three times whereas it reaches spike thresold seven times in the scattered case (black)." Please correct "thresold" to "threshold". Also, from the figure, in the clustered case it reaches the spike threshold two times (not three).

○ Please update the Github link to point to the revised code.

○ There is a mismatch between code and text: at code lines 71 and 77, g is bounded between 0 and 0.1 nS (i.e., 100 pS), while the text refers to 10 pS.
○ Figure 1 Legend: We suggest adding the respective episodes of the truth table in Table 1.

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes

Figure Legend: "filled circles stand for a >50 Hz input spike train while empty circles stand for >50 Hz input spike train.", please change the second ">" to "<".

Definition 2: the number of variables denoted with n should be in italic for consistency.
- I corrected these mistakes, which were introduced during the editorial process.

Figure 1B and C: In Figure 1B, the color scheme denotes the LIF vs. DIF models. The same color scheme is used in panel C to denote the scattered vs. clustered case. As it stands now, it is confusing. Consider changing the line style on panel C.
- I worked on the figure so panels B and C no longer conflict with each other.

Figure Legend: "In the clustered case (grey), the neuron reach spike threshold three times whereas it reaches spike thresold seven times in the scattered case (black)." Please correct "thresold" to "threshold". Also, from the figure, in the clustered case it reaches the spike threshold two times (not three).
-The panel has now changed and we rewrote the legend.
Please update the Github link to point to the revised code.
There is a mismatch between code and text: at code lines 71 and 77, g is bounded between 0 and 0.1 nS (i.e., 100 pS), while the text refers to 10 pS.
- I updated the git repository and corrected the text to solve this mismatch.

Figure 1 Legend: We suggest adding the respective episodes of the truth table in Table 1.
-I followed the reviewer's suggestion.

Version 1
Reviewer Report 13 July 2021
https://doi.org/10.5256/f1000research.57398.r89096
© 2021 Papoutsi A. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Institute of Molecular Biology & Biotechnology, Foundation for Research & Technology -Hellas, Heraklion, Greece
This Brief Report shows at a conceptual level that electrically compact neurons can solve linearly non-separable computations of four or more inputs. This is a result of the saturating responses to 'clustered' input at the dendritic level (simulating mainly the reduction of the driving force in the dendrites), and the increased response to 'scattered' input at the somatic level. This study expands on previous work by the author and others and adds to the range of computations neurons can perform with their dendrites.
We have two major concerns that limit the clarity of this work:
1. Figure 1B and 3rd paragraph on page 5: The x-axis label states 'Time', and the relevant text states that "the membrane voltage takes more time to reach threshold in the clustered case (total g = 10pS) than in the scattered case (total g = 20pS)". Given that this work is based on an arbitrary thresholding of the output frequency, it is not obvious where time is involved or what it means on the x-axis.
2. In the provided code on GitHub (line 78 in the code), the ceiling of the second dendrite (i.e., syn2 in the code) is set to 0.5 and not to 0.1. Please clarify the value used. If those different saturating thresholds were indeed used, this should be explicitly stated and reasoned in the main text.