We analyze the computational power of discrete-time recurrent neural networks
(NNs) with the saturated-linear activation function within the Chomsky
hierarchy. This model restricted to integer weights coincides with binary-state
NNs with the Heaviside activation function, which are equivalent to finite
automata (Chomsky level 3) recognizing regular languages (REG), while rational
weights make this model Turing-complete even for three analog-state units
(Chomsky level 0). For the intermediate model $\alpha$ANN of a binary-state NN that is extended with $\alpha \geq 0$ extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs $\subset$ 1ANNs $\subset$ 2ANNs $\subseteq$ 3ANNs.
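For concreteness, the fully parallel discrete-time dynamics underlying this model can be written as follows (a standard formulation; the notation is ours and not fixed by this abstract):
\[
  y_j^{(t+1)} = \sigma\biggl( w_{j0} + \sum_{i=1}^{n} w_{ji}\, y_i^{(t)} \biggr),
  \qquad
  \sigma(\xi) =
  \begin{cases}
    1 & \text{if } \xi \geq 1, \\
    \xi & \text{if } 0 < \xi < 1, \\
    0 & \text{if } \xi \leq 0,
  \end{cases}
\]
where a neuron is binary-state if its state stays in $\{0,1\}$ (with integer weights all excitations are integers, so $\sigma$ outputs only $0$ or $1$), and analog-state if it may attain any rational value in $[0,1]$.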
The separation 1ANNs $\subsetneqq$ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) $L_\# = \{0^n 1^n \mid n \geq 1\}$, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with
rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL can be recognized by a 1ANN with real weights, which means (DCFLs $\setminus$ REG) $\subset$ (2ANNs $\setminus$ 1ANNs), implying 1ANNs $\cap$ DCFLs = 0ANNs.
For this purpose, we have shown that $L_\#$ is the simplest non-regular DCFL by reducing $L_\#$ to any language in this class (so that a 1ANN recognizing any non-regular DCFL could be used to recognize $L_\#$), which is by itself an interesting achievement in computability theory.
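To make the analog state tangible, the following is a minimal Python sketch (ours, purely illustrative; it is an ordinary program, not the paper's 1ANN or 2ANN construction) of how a single rational value $c \in [0,1]$, updated by affine maps composed with the saturated-linear function, can serve as an unbounded counter recognizing $L_\#$. Note that the sketch relies on an exact emptiness test $c = 0$; the paper proves that no 1ANN can realize such a recognizer.

```python
from fractions import Fraction

def sat(x):
    """Saturated-linear activation: sigma(x) = min(1, max(0, x))."""
    return min(Fraction(1), max(Fraction(0), x))

def accepts_L_sharp(word):
    """Accept exactly the words 0^n 1^n with n >= 1.

    A single analog value c in [0, 1] acts as a unary counter:
    reading '0' applies c -> sat((c + 1)/2)   (increment),
    reading '1' applies c -> sat(2*c - 1)     (decrement).
    After k increments c = 1 - 2**(-k), so c returns exactly to 0
    iff the number of 1s matches the number of 0s.
    """
    c = Fraction(0)
    seen_one = False
    for ch in word:
        if ch == '0':
            if seen_one:              # a '0' after a '1': wrong shape
                return False
            c = sat((c + 1) / 2)      # increment the counter
        elif ch == '1':
            if c == 0:                # more 1s than 0s
                return False
            seen_one = True
            c = sat(2 * c - 1)        # decrement the counter
        else:
            return False              # alphabet is {0, 1}
    return seen_one and c == 0

assert accepts_L_sharp("01") and accepts_L_sharp("000111")
assert not any(map(accepts_L_sharp, ["", "0", "10", "001", "0110"]))
```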