What If Neural Networks Have the Signal Backwards?
First empirical results from the Uni-Bit Vector Gate project
Justin Harris | April 15, 2026
The Inversion
Here’s something that’s been bothering me for years: modern AI and biological brains process information in exactly opposite ways.
In your brain, the action potential — the electrical spike that travels down a neuron — is informationally stupid. It’s a binary switch: fire or don’t fire. One bit. The actual computational payload is carried by the neurotransmitter cocktail released at the synapse: serotonin, dopamine, GABA, glutamate, acetylcholine, neuropeptides — a rich ensemble of typed chemical signals that encode what the spike means.
In AI, it’s flipped. The activation vector is high-dimensional and information-rich. The weight is a boring scalar multiplier. The vector carries the meaning; the scalar just scales it.
What if we un-flip it?
The Architecture
I’ve been developing an architecture called the Uni-Bit Vector Gate. The idea is simple: demote the vector to a single binary switch (like a biological spike), and pack all the information into a scalar ensemble (like a synapse releasing neurotransmitters).
• 100 binary gates per layer — each one fires (1) or doesn’t (0)
• 10 scalar carriers per gate, grouped into 5 types (monoamine, GABA, glutamate, ACh, neuropeptide)
• When a gate fires, its carriers are released. When it doesn’t? Zero cost. The failed pathway is free.
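To make the shape of this concrete, here is a minimal sketch of one forward pass through a uni-bit gate layer. All names are my own illustration, not the project's actual codebase; the only assumptions baked in are the numbers above: 100 gates, 10 carriers each, 5 types.

```python
import random

# Hypothetical sketch of a Uni-Bit Vector Gate layer (illustrative names).
# Each gate is one binary switch; all information rides on its 10 scalar
# carriers, grouped into 5 neurotransmitter-like types.

N_GATES = 100
CARRIERS_PER_GATE = 10
TYPES = ["monoamine", "GABA", "glutamate", "ACh", "neuropeptide"]
CARRIERS_PER_TYPE = CARRIERS_PER_GATE // len(TYPES)  # 2 carriers per type

def forward(gate_preacts, carriers, threshold=0.0):
    """gate_preacts: one scalar per gate; carriers: one 10-scalar payload per gate.
    Returns (type-aggregated signal, layer firing rate)."""
    out = {t: 0.0 for t in TYPES}
    fired = 0
    for pre, payload in zip(gate_preacts, carriers):
        if pre <= threshold:
            continue  # silent gate: zero cost, zero signal
        fired += 1
        for i, t in enumerate(TYPES):  # release this gate's carrier ensemble
            lo = i * CARRIERS_PER_TYPE
            out[t] += sum(payload[lo:lo + CARRIERS_PER_TYPE])
    return out, fired / len(gate_preacts)

random.seed(0)
pre = [random.gauss(-1.5, 1.0) for _ in range(N_GATES)]  # biased toward silence
car = [[random.gauss(0.0, 1.0) for _ in range(CARRIERS_PER_GATE)]
       for _ in range(N_GATES)]
signal, rate = forward(pre, car)
```

The key structural point is the `continue`: a silent gate contributes nothing and costs nothing, while a fired gate's entire payload is compressed through the five type-level aggregates.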
Today I ran the first real experiments. Here’s what happened.
What Worked
The model learns
The architecture trains to 97.6% accuracy on the training set and 62.4% on held-out test data (vs. a 50% random baseline). Binary gates + scalar ensembles can learn non-trivial compositional tasks. This was the first milestone the project had to clear, and it cleared.
Biological firing rates — emergent
This one genuinely surprised me.

Real neurons fire about 1–10% of the time. Layer 0 of the uni-bit model settled at 3.5% — right in the biological range. No sparsity penalty. No regularizer pushing it there. The architecture naturally becomes sparse because the information bottleneck (type-aggregated carriers) forces selective gating. It’s expensive to fire when your payload has to compress through type-level energy budgets.
The economics of failure in this architecture mirror biology: silence is free, firing is informative.
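For clarity on what the 3.5% figure measures: the firing rate is presumably just the fraction of gate activations that are 1, averaged over a batch. A hypothetical helper (mine, not the project's code):

```python
def firing_rate(gate_bits):
    """gate_bits: list of per-step lists of 0/1 gate states.
    Returns the fraction of (step, gate) slots where the gate fired."""
    total_fires = sum(sum(step) for step in gate_bits)
    total_slots = len(gate_bits) * len(gate_bits[0])
    return total_fires / total_slots

# 1 firing out of 8 slots: 2 steps x 4 gates
rate = firing_rate([[1, 0, 0, 0], [0, 0, 0, 0]])
```

A layer-0 rate of 3.5% means roughly 3 to 4 of the 100 gates fire on a typical step.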
What Didn’t Work (Yet)
Conservation laws need reframing
The paper claimed, via a derivation from Noether's theorem, that per-type carrier budgets are conserved during training. The experiments showed they're not: budgets drift upward under soft penalties.

This one bothered me: Noether's theorem was the whole reason I landed on this idea in the first place.
I struggled with it. The first paper opened by showing the linear mapping to real space.
But here’s the thing: Noether’s theorem isn’t wrong.
The issue was subtler.
The symmetry (O_{k,j} rotations within each carrier type) preserves the relative distribution of carriers within a type — the “recipe” — but not the total magnitude. The recipe is conserved. The dose isn’t.
The better biological analogy isn’t conservation — it’s homeostasis. Real neurons don’t passively conserve neurotransmitter levels. They actively regulate them through synthesis, reuptake, and degradation. The uni-bit architecture can enforce budget homeostasis via projection after each training step, and training still works perfectly. Active regulation, not passive conservation.
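The projection step can be sketched in a few lines. This is my illustration of the idea under stated assumptions (flat carrier list, L2-norm budgets, hypothetical names), not the project's implementation: after each optimizer step, rescale each type's carrier block back onto its fixed budget.

```python
import math

# Hypothetical sketch of budget homeostasis by projection (illustrative names).
# After each gradient step, each carrier type's block is rescaled so its L2
# norm matches a fixed per-type budget: active regulation, not passive
# conservation.

def project_to_budget(carriers, type_slices, budgets, eps=1e-12):
    """carriers: flat list of scalars; type_slices: {type: (start, stop)};
    budgets: {type: target L2 norm}. Returns the projected carrier list."""
    out = list(carriers)
    for t, (lo, hi) in type_slices.items():
        norm = math.sqrt(sum(c * c for c in out[lo:hi]))
        scale = budgets[t] / (norm + eps)             # reset the "dose" ...
        out[lo:hi] = [c * scale for c in out[lo:hi]]  # ... keep the "recipe"
    return out

slices = {"monoamine": (0, 2), "GABA": (2, 4), "glutamate": (4, 6),
          "ACh": (6, 8), "neuropeptide": (8, 10)}
budgets = {t: 1.0 for t in slices}
drifted = [0.5, 1.5, -2.0, 0.1, 3.0, 0.2, -0.4, 0.9, 1.1, -1.2]  # post-step drift
fixed = project_to_budget(drifted, slices, budgets)
```

Note what this preserves: the ratios of carriers within each type (the recipe, which the rotational symmetry already protects) are untouched, while each type's total magnitude (the dose) is snapped back to budget.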
Hallucination detection: null result
The paper’s headline claim — that conservation violations predict failures — didn’t pan out at this scale. AUC = 0.49, essentially random. This might improve at larger scale, or it might be a clean negative. Either way, it’s honest data.
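For readers unfamiliar with the metric: AUC reduces to a pairwise ranking probability, so 0.49 means a conservation-violation score ranks a failure above a success about as often as a coin flip. A minimal sketch of that computation (mine, not the paper's evaluation code):

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney formulation of AUC: the probability that a randomly
    chosen positive outranks a randomly chosen negative.
    0.5 = chance; 1.0 = perfect separation."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# A detector at chance: violation scores for failures and successes are
# identically distributed, so AUC sits at 0.5 by symmetry.
chance = auc([1, 2, 3], [1, 2, 3])
```

An AUC of 0.49 at this scale is indistinguishable from the chance case above.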
What I Learned
Three things:
1. The uni-bit inversion works. Binary gates + typed scalar ensembles can learn. The architecture is sound.
2. The biology runs deeper than expected. Firing rates, homeostatic regulation, the economics of silence — these aren’t just metaphors. They emerge from the math.
3. Honest nulls are publishable. The hallucination detection claim doesn’t hold at n=100. That’s a finding, not a failure.
What’s Next
Scaling up. More gates, more data, pharmacology simulations (what happens when you apply an “SSRI” to a neural network?), and baseline comparisons against standard architectures. The paper needs rigorous Noether derivations for the angular conservation that actually holds, and the conservation section gets reframed around homeostasis.
If you want to follow along, the code is being developed at TroponinIQ and the full research log is maintained in research-results.md in the project repo.