As we will see, the combination lᵢ²/(v‖v⊥λᵢ²) measures a packing density of discs placed on the grid lattice, where λᵢ is the grid period and lᵢ the field size at scale i. This suggests that we should separate the minimization of neuron number into first optimizing the lattice and then optimizing the ratios. After doing so, we can check that the result is the global optimum.

To obtain the optimal lattice geometry, we can ignore the resolution constraint, as it depends only on the scale factors and not on the grid geometry. We may then exploit an equivalence between our optimization problem and the optimal circle-packing problem. To see this connection, consider placing disks of diameter lᵢ on each vertex of the grid at scale i. To avoid ambiguity, all points of grid i must be separated by at least lᵢ; equivalently, the disks must not overlap. The density of disks is proportional to lᵢ²/(v‖v⊥λᵢ²), which is proportional to the reciprocal of the ith term in N. Thus, minimizing neuron number amounts to maximizing the packing density, and the no-ambiguity constraint requires that the disks do not overlap. This is the optimal circle-packing problem, whose solution in two dimensions is known to be the triangular lattice (Thue), fixing the geometric factors v‖ and v⊥ at their triangular-lattice values. Moreover, the grid spacing should be as small as the no-ambiguity constraint allows, giving λᵢ = lᵢ.

We have now reduced the problem to minimizing N ∝ Σᵢ rᵢ² over the scale factors rᵢ, while fixing the resolution R = Πᵢ rᵢ. This optimization problem is mathematically the same as in one dimension if we formally treat rᵢ² as the one-dimensional scale factor. This gives rᵢ² = e, that is, rᵢ = √e, for all i (Figure C). We conclude that in two dimensions the optimal ratio of neighboring grid periodicities is √e for the simple winner-take-all decoding model, and the optimal lattice is triangular.
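To illustrate the reduced optimization, the following is a minimal numerical sketch (ours, not the paper's code; the function name and the treatment of the module number m as a free integer are our assumptions). Assuming equal ratios rᵢ = r across m modules, the constraint R = Πᵢ rᵢ gives r = R^(1/m), and the winner-take-all cost behaves as N ∝ m·R^(2/m); the sketch finds the integer m minimizing this and reports the implied ratio.

import math

# Minimal numerical sketch (ours, not the paper's code): for the simple
# winner-take-all decoder in two dimensions, take the cost as
# N ~ sum_i r_i**2 with the resolution fixed at R = prod_i r_i.
# With equal ratios r_i = r across m modules, r = R**(1/m) and
# N(m) ~ m * R**(2/m); we search over integer m and report the implied ratio.

def optimal_ratio_wta_2d(R):
    """Return (m, r) minimizing m * R**(2/m) over the number of modules m."""
    best_m, best_cost = 1, float("inf")
    for m in range(1, 200):
        cost = m * R ** (2.0 / m)  # neuron count up to a lattice-dependent constant
        if cost < best_cost:
            best_m, best_cost = m, cost
    return best_m, R ** (1.0 / best_m)

for R in (10.0, 100.0, 1000.0):
    m, r = optimal_ratio_wta_2d(R)
    print(f"R = {R:6.0f}: m = {m:2d} modules, r = {r:.3f} (sqrt(e) = {math.sqrt(math.e):.3f})")

The reported ratios hover near √e ≈ 1.65, with small deviations caused by the integer constraint on the number of modules.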
The optimal probabilistic decoding model from above can also be extended to two dimensions, with the posterior distributions P(x|i) becoming sums of Gaussians with peaks on the two-dimensional lattice. In analogy with the one-dimensional case, we then derive a formula for the resolution R in terms of the standard deviation σₘ of the posterior given all scales. The quantity σₘ can be calculated explicitly as a function of the scale factors rᵢ and the geometric factors v‖ and v⊥, and the minimization of neuron number can then be carried out numerically ('Optimizing the grid system: probabilistic decoder', 'Materials and methods'). In this approach, the optimal scale factor turns out to be somewhat smaller than √e (Figure C), and the optimal lattice is again triangular (Figure D). Attractor network models of grid formation readily produce triangular lattices (Burak and Fiete); our analysis suggests that this architecture is functionally beneficial in reducing the required number of neurons.

Although our two decoding strategies lie at extremes of complexity (one relying only on the most active cell at each scale, the other optimally pooling information across the grid population), their respective 'optimal intervals' substantially overlap (Figure C; see Figure B in 'Materials and methods' for the one-dimensional case). This indicates that our proposal is robust to variations in grid field shape and to the precise decoding algorithm (Figure C). The scaling ratio r may lie anywhere within this overlapping interval.
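To give a sense of how broad the winner-take-all optimum is, the following sketch (ours, under the same equal-ratio approximation as above, additionally treating the number of modules as continuous so that N(r) ∝ r²/ln r) tabulates the neuron count relative to its minimum at r = √e.

import math

# Sketch (ours): under the equal-ratio, continuum approximation the
# winner-take-all neuron count per unit of log-resolution scales as
# r**2 / ln(r), minimized at r = sqrt(e). Tabulating it relative to the
# minimum shows a broad, shallow basin around the optimum.

def relative_cost(r):
    """Neuron count at ratio r, normalized to its minimum value 2e at r = sqrt(e)."""
    return (r * r / math.log(r)) / (2.0 * math.e)

for r in (1.3, 1.4, 1.5, math.sqrt(math.e), 1.8, 2.0):
    print(f"r = {r:.3f}: relative neuron count = {relative_cost(r):.3f}")

In this approximation, ratios anywhere between roughly 1.4 and 2 cost less than about 10% more neurons than the optimum, which is one way to see why a broad interval of scale ratios can be nearly optimal.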