
Liminaut Project — Reference Document v1

The Threshold Codex

I kept noticing the same patterns showing up in neuroscience, in the circuits I was building, and in symbol systems I'd been reading about for years. This is me trying to map those connections out. I'm not an expert in any of these fields. I'm just someone who thinks they might be describing the same thing from different angles.

⚡🜂☿
I The Membrane — Integration and Leak 🜄 ∫ capacitor
Neural Model
Passive membrane
Real neurons have a lipid bilayer that acts like a capacitor. Charge builds up across it when ion channels let current in. The resistance and capacitance aren't metaphors, they're literal electrical properties of biological tissue.
Circuit Implementation
R_leak x C_mem
In my build this is a 220kΩ leak resistor and a 10µF capacitor. The voltage across the cap is V_mem. The resistor is always slowly pulling it back toward zero. τ = RC works out to about 2.2 seconds.
Formal Math
τ · dV/dt = -V + R·I(t)
The voltage decays toward zero at rate -V/τ while input current I keeps charging it back up. When it stabilizes, dV/dt = 0 and V = R·I. I had to look up what dV/dt meant but it's just "how fast voltage is changing right now."
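Here's a quick Python sketch of that equation stepped forward in time with the R and C values above. The input current is just a number I picked for illustration, not a measurement from the board:

```python
# Forward-Euler integration of tau * dV/dt = -V + R*I(t)
# with the 220 kOhm / 10 uF values quoted above (tau ~ 2.2 s).

R = 220e3        # leak resistance, ohms
C = 10e-6        # membrane capacitance, farads
tau = R * C      # time constant, about 2.2 s

dt = 0.001       # integration step, seconds
V = 0.0          # membrane voltage relative to rest, volts
I = 5e-6         # constant input current, amps (illustrative)

for step in range(int(5 * tau / dt)):     # run for about 5 tau
    V += (-V + R * I) * (dt / tau)        # one Euler step of the membrane equation

print(f"tau = {tau:.2f} s, steady state R*I = {R * I:.2f} V, V after 5 tau = {V:.2f} V")
```

After about five time constants the voltage has settled at R·I, which is the dV/dt = 0 point mentioned above.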
Symbolic Correspondences
🜄 ⚱ 杯 ᚦ
🜄 Alchemical Water — receptive, accumulating. Felt like the right element for something that collects and holds.

杯 (Bēi, Chinese) — cup or vessel. Same idea of a container that fills up.

ᚦ Thurisaz (Norse rune) — potential force held before release. The charged state waiting to cross threshold.
Neural Model
Passive leak current
K+ leak channels let charge slowly drain out. Without input the membrane just drifts back to resting potential around -65mV. It forgets what happened.
Circuit Implementation
R_leak to GND
The 220kΩ from membrane node to ground. It's always on, always pulling V_mem toward zero. Swap a bigger resistor in and the neuron leaks slower and has a longer memory of past inputs.
Formal Math
V(t) = V₀ · e^(-t/τ)
Exponential decay. After one τ the voltage is at 37% of where it started. After 5τ it's basically gone. This is just the math of any RC circuit discharging.
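A two-line check of those decay fractions, since I had to convince myself the 37% number was real:

```python
# V(t) = V0 * exp(-t / tau): how much is left after 1, 2, and 5 time constants.
import math

for n in (1, 2, 5):
    print(f"after {n} tau: {math.exp(-n) * 100:.1f}% of V0 remains")
```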
Symbolic Correspondences
🜔 ♄ 無 𓇋
🜔 Alchemical Salt — dissolution, return to base state. What's left when everything else is gone.

♄ Saturn — time, slow return, entropy. The planet associated with patience and decay.

無 (Wú, Taoist) — nothingness, the ground state. What the membrane returns to.

𓇋 (Egyptian) — stability, the resting column.
II The Threshold — Comparator and Spike 🜂 δ LM393
Neural Model
Action potential initiation
When V_mem hits around -55mV, voltage-gated sodium channels snap open all at once. It's all-or-nothing, the neuron either fires fully or doesn't fire at all. There's no partial spike.
Circuit Implementation
LM393 open-collector
V_mem goes to the inverting input IN- (pin 2), threshold voltage from a 10kΩ divider goes to IN+ (pin 3). When membrane voltage wins, the open-collector output pulls LOW. Needed a 470Ω pullup to get the resting level high enough to actually drive the next neuron.
Formal Math
V_mem(t) >= V_th → spike
It's basically a Heaviside step function. Zero below threshold, one above. The spike itself is a Dirac delta, δ(t-t_i), which is a math way of saying "instantaneous event at time t_i." I found this in Dayan and Abbott.
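Here's the same rule as a small Python function: scan a voltage trace and record a spike time wherever it crosses threshold from below. The ramp input and the 1.0 V threshold are made-up numbers, not the divider voltage from the build:

```python
# Threshold rule: spike whenever V_mem crosses V_th from below.

def detect_spikes(v_trace, v_th, dt):
    """Return spike times t_i where the trace crosses threshold upward."""
    spikes = []
    for i in range(1, len(v_trace)):
        if v_trace[i - 1] < v_th <= v_trace[i]:
            spikes.append(i * dt)
    return spikes

dt = 0.01
ramp = [0.02 * i for i in range(100)]        # a slow ramp from 0 toward 2 V
print(detect_spikes(ramp, v_th=1.0, dt=dt))  # one crossing, at t = 0.5 s
```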
Symbolic Correspondences
🜂 ⚡ 雷 ᚱ ☈
🜂 Alchemical Fire — sudden transformation, the irreversible event. Felt obvious for the firing moment.

雷 (Léi, Chinese) — thunder, sudden discharge. Also the I Ching hexagram of arousal.

ᚱ Raidho (Norse) — journey begun, threshold crossed.

☈ Vajra (Vedic) — thunderbolt, instantaneous transmission of force.
Neural Model
Refractory period
Right after firing, sodium channels inactivate and potassium channels open. The membrane goes below resting potential for a bit. The neuron physically can't fire again for around 1-2ms.
Circuit Implementation
Reset feedback R
100k-220kΩ from the LM393 output back to the membrane node. After the spike it drains the cap. Took me a while to figure out you can't let this self-oscillate or it latches. Lower R means faster reset and higher possible firing rate.
Formal Math
V → V_reset after t_i
The reset is modeled as instantaneous. V_mem jumps to V_r right after the spike. During the refractory window τ_ref the neuron ignores input entirely, modeled as a gate H(t-t_i-τ_ref).
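Putting sections I and II together, here's a minimal leaky integrate-and-fire loop in Python: leak, threshold, instantaneous reset, refractory gate. The threshold, reset value, refractory window, and drive current are all illustrative, not measurements from the build:

```python
# Leaky integrate-and-fire with instantaneous reset and a refractory gate.
# Values below are illustrative; only R, C match the build.

R, C = 220e3, 10e-6
tau = R * C
dt = 0.001
V_th, V_reset = 1.0, 0.0       # threshold and reset voltage (assumed)
tau_ref = 0.1                  # refractory window, seconds (assumed)

V = 0.0
last_spike = -1e9
spike_times = []

for step in range(int(20 / dt)):             # 20 s of simulated time
    t = step * dt
    I = 6e-6                                 # constant drive, amps (illustrative)
    if t - last_spike < tau_ref:
        I = 0.0                              # refractory gate: ignore input entirely
    V += (-V + R * I) * (dt / tau)
    if V >= V_th:
        spike_times.append(t)
        V = V_reset                          # instantaneous reset after the spike
        last_spike = t

print(f"{len(spike_times)} spikes, first few: {[round(t, 2) for t in spike_times[:3]]}")
```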
Symbolic Correspondences
☽ 死 ᛃ 🝐
☽ Waning Moon — withdrawal, the pause after peak.

死 (Sǐ, Chinese) — death/renewal. The neuron briefly goes dark and comes back.

ᛃ Jera (Norse) — the year cycle, harvest then fallow. Necessary rest.

🝐 Alch. Putrefaction — decomposition before the next transformation.
III Spike Trains — Time, Sequence and Memory ☿ ρ(t) Σδ
Each entry below gives the math expression, the symbol name and origin, the neural meaning, and the circuit analog.
ρ(t) = Σ δ(t-tᵢ)
Mercury / Hermes (Greek/Roman alchemy)
Neural meaning: The neural response function. Just a list of all the spike moments across time. The messenger carrying information between neurons.
Circuit analog: This is literally the LM393 output signal. Each LOW pulse is one δ(t-tᵢ). The full pulse train over time is ρ(t).
r(t) = (1/Δt) ∫[t, t+Δt] ρ(τ) dτ
𓂀 Eye of Horus (Egyptian)
Neural meaning: Firing rate. How many spikes per second averaged over a time window. The eye that counts and perceives rate of change.
Circuit analog: The Pico counting pulses per window. The 74HC595 shift register converts spike rate into a digital value the SNN layer can read. (Sketched in code after this table.)
C(τ) = ∫ ρ₁(t)ρ₂(t+τ)dt
Naudhiz (Norse rune, Elder Futhark)
Neural meaning: Spike cross-correlation. Measures how much two neurons fire together at a given lag τ. High correlation means they tend to fire together. The rune of necessity and binding.
Circuit analog: This is the basis of STDP detection. Pre and post spike traces get compared. It's what decides whether a synapse strengthens or weakens. (Sketched in code after this table.)
ISI = tᵢ₊₁ - tᵢ
間 (Ma), Negative Space (Japanese aesthetic)
Neural meaning: Inter-spike interval. The time between spikes. Information is in the gaps as much as in the events themselves. Ma is the meaningful pause.
Circuit analog: The RC time constant sets the minimum ISI. Refractory period sets a hard floor. Mixing different τ values across neurons creates varied ISI patterns across the network.
CV = σ_ISI / μ_ISI
Ascending Node (Astrological)
Neural meaning: Coefficient of variation of ISI. CV=0 is a perfect clock. CV=1 is random noise. Real neurons sit around 0.5 to 1.2. Variability itself carries information.
Circuit analog: The noisy MCP6002 in the sensory layer pushes CV toward the biological range. The precision OPA2340 further down the chain brings it back toward deterministic. (ISI and CV are computed in the sketch after this table.)
τ = RC
Hourglass / Kronos (Greek / universal)
Neural meaning: Time constant. How long the neuron holds onto its past. Small τ forgets fast. Large τ integrates slowly and holds context longer.
Circuit analog: Fast: 47kΩ×10µF = 0.47s. Medium: 100kΩ×10µF = 1s. Slow: 470kΩ×10µF = 4.7s. Very slow: 220kΩ×47µF = 10s. The resistor and cap are the cell's personality.
IV Synapses — Weight, Plasticity and STDP 🜍 Δw ∂
Neural Model
Synaptic weight
How strongly one neuron influences another. Depends on receptor density, how much neurotransmitter gets released, the physical shape of the synapse. It's not fixed, it changes based on activity.
Circuit Implementation
Multi-substrate
Three different materials for three different regimes. Ag-agar ECL for sensory inputs (noisy, biological). Alumina/graphite DES paste for association layer (reproducible). MCP4131 digital pots for output layer (deterministic). The messiness is intentional in the sensory layer.
Formal Math
I_syn = w · g(t) · (V_syn - V)
Synaptic current = weight times conductance times driving force. g(t) decays after a spike. V_syn is positive for excitatory synapses and negative for inhibitory ones. w is what STDP changes.
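A small Python sketch of that synapse model, assuming the conductance g(t) jumps on each pre-synaptic spike and decays exponentially afterward. The weight, decay time, and V_syn values are illustrative:

```python
# Conductance-based synaptic current: I_syn = w * g(t) * (V_syn - V).
# g(t) is the sum of exponentially decaying kicks from past pre-synaptic spikes.

import math

def synaptic_current(t, pre_spike_times, V, w=0.5, tau_g=0.2, V_syn=1.0):
    """Sum the decayed conductance from every pre-synaptic spike up to time t."""
    g = sum(math.exp(-(t - t_pre) / tau_g)
            for t_pre in pre_spike_times if t_pre <= t)
    return w * g * (V_syn - V)

# Excitatory synapse: V_syn above the membrane voltage, current is positive.
print(synaptic_current(t=0.35, pre_spike_times=[0.1, 0.3], V=0.2))
# Inhibitory synapse: V_syn below the membrane voltage, current is negative.
print(synaptic_current(t=0.35, pre_spike_times=[0.1, 0.3], V=0.2, V_syn=-0.5))
```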
Symbolic Correspondences
🜍 ⚖ 縁 ᛜ
🜍 Alch. Conjunction — union of two substances. The bond between two circuits.

⚖ Ma'at scales — weight, balance. The synapse weighs evidence before passing it on.

縁 (En, Japanese) — karmic connection, relational bond, the thread between things.

ᛜ Ingwaz (Norse) — seed, stored potential waiting to be released.
Neural Model
STDP, Hebbian plasticity
If a synapse's pre-synaptic neuron fires just before the post-synaptic one, the connection strengthens (LTP). If the order is reversed, it weakens (LTD). "Neurons that fire together wire together" is the shorthand. The timing window is around 20ms in biology.
Circuit Implementation
DIY electrochemical
Pre and post spike traces each have their own RC decay. When they overlap in time, the correlation drives current through the memristive element. Silver migrates in the agar or alumina reduces in the DES paste. The physical material changes and that's the memory.
Formal Math
Δw = A₊·e^(-Δt/τ₊) if Δt>0
Δw = -A₋·e^(Δt/τ₋) if Δt<0
Δt = t_post minus t_pre. Positive Δt means pre fired first so weight goes up. Negative means post fired first so weight goes down. The effect decays exponentially the further apart in time the spikes were.
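The same pair-based rule as a Python function. The amplitudes are arbitrary illustrative values; the 20 ms windows match the biological timing scale mentioned above:

```python
# Pair-based STDP: pre-before-post strengthens (LTP), post-before-pre weakens (LTD).
# dt = t_post - t_pre. Amplitudes are arbitrary; the 20 ms windows are from the text.

import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau_plus=0.020, tau_minus=0.020):
    dt = t_post - t_pre
    if dt > 0:                                   # pre fired first: potentiate
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:                                   # post fired first: depress
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(t_pre=0.100, t_post=0.110))   # +10 ms  -> positive dw (LTP)
print(stdp_dw(t_pre=0.110, t_post=0.100))   # -10 ms  -> negative dw (LTD)
print(stdp_dw(t_pre=0.100, t_post=0.200))   # +100 ms -> dw close to zero
```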
Symbolic Correspondences
☯ 业 ᚢ 🝊
☯ Taijitu — LTP and LTD as two complementary forces. Causality and its inverse.

业 (Karma) — timing determines consequence. The synapse remembers what caused what.

ᚢ Uruz (Norse) — strength, the force that shapes the path.

🝊 Alch. Fermentation — slow internal change through sustained process.
Neural Model
Luminol ECL visualization
When silver nanoparticles form in the agar during a switching event, they amplify a luminol electrochemiluminescence reaction. The synapse literally glows when it switches. I found this while looking for a way to visualize what the agar was doing.
Circuit Implementation
Ag-agar full stack
Agar + AgNO3 + KNO3 + glycerol 10-15% + PAA for longer memory (200-4000s) + luminol + Na2CO3 buffer at pH 9. The silver nanoparticles amplify ECL somewhere between 1000 and 1770x. Switching happens between 1 and 3V.
Formal Math
I_ECL ∝ [Ag+]·[luminol]·e^(-Ea/kT)
Light intensity scales with silver ion concentration, luminol concentration, and an Arrhenius temperature term. The nanoparticles lower the activation energy Ea through surface plasmon resonance. I mostly take this on faith and look at the glow.
Symbolic Correspondences
🌕 光 ᛊ ☽☉
光 (Guāng, Chinese) — light, the moment of becoming visible.

ᛊ Sowilo (Norse) — the sun rune, the flash of understanding.

☽☉ Luna-Sol (alchemy) — silver is the moon metal, receptive. Current activates it. Their meeting produces light.

φῶς (Greek) — phosphorescence, light-bearing, fire made chemical.
V Network Dynamics — Emergence and Architecture ∞ 網 ᚦ
Each entry below gives the concept, the symbol name and origin, the mathematical form, and the circuit / build analog.
Coincidence detection · Logical AND / Indra's Net (Buddhist metaphysics)
Mathematical form: fire if Σwᵢxᵢ >= θ, all inputs near-simultaneous
Circuit / build analog: A fast τ neuron (47kΩ×10µF) that only fires if two inputs arrive close together. If they don't arrive within one τ window the charge decays before threshold. A physical AND gate. (Simulated in the first sketch after this table.)
Oscillator pair · ☯ ∞ Ouroboros (Egyptian/Greek/Norse)
Mathematical form: V̇ = f(V) + g(V)(-V_inh)
Circuit / build analog: Two neurons that inhibit each other. When one fires it suppresses the other via an NPN transistor inverter. They take turns. Anti-phase oscillation, basically the simplest rhythm generator you can build. (Simulated in the second sketch after this table.)
Winner-take-all · ⚔ 一 Yī (Unity) / Agonism (Chinese / Greek)
Mathematical form: xᵢ* = argmax(xᵢ) via lateral inhibition
Circuit / build analog: Two neurons get the same input. First one to cross threshold inhibits the other. Only one gets to fire. It's a decision circuit.
Reverberant loop · ∮ 螺 Spiral / Enso (Japanese Zen)
Mathematical form: x(t+1) = f(x(t)), recurrent attractor
Circuit / build analog: A neuron that excites itself via a delayed path. Keeps firing after the input is gone. The simplest possible working memory, a circuit that remembers by not stopping.
Heterogeneous τ · 𝄞 時 Time (Shí) / Musical time (Chinese / universal)
Mathematical form: τᵢ ∈ {0.47, 1.0, 4.7, 10.0}s
Circuit / build analog: Fast neurons catch quick changes. Slow neurons integrate over long windows. Just by choosing different R and C values for each neuron you get a network that responds to temporal patterns automatically.
Emergent behavior · Tao, The Way (Chinese Taoism)
Mathematical form: P(system) != Σ P(parts)
Circuit / build analog: The goal. The network doing something none of the individual neurons could do alone. Not programmed, not simulated. Built from components on a breadboard and figured out over time.
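Two of these motifs are easy to sanity-check in simulation before wiring them. First the coincidence detector: a fast-τ leaky neuron that only fires when two input kicks land inside the same τ window. The kick size and threshold are made-up numbers:

```python
# Coincidence detection: a fast-tau leaky neuron driven by brief input kicks.
# It only reaches threshold if two kicks arrive within roughly one tau of each other.

def run_coincidence(pulse_times, tau=0.47, dt=0.001, v_th=1.0, pulse_kick=0.6, t_max=3.0):
    """Return spike times for a leaky neuron driven by brief input kicks."""
    V, spikes = 0.0, []
    pulses = set(round(t / dt) for t in pulse_times)
    for step in range(int(t_max / dt)):
        V *= 1.0 - dt / tau                  # leak between inputs
        if step in pulses:
            V += pulse_kick                  # each input pulse adds a fixed kick
        if V >= v_th:
            spikes.append(step * dt)
            V = 0.0
    return spikes

print(run_coincidence([0.50, 0.60]))   # 100 ms apart -> fires (coincident)
print(run_coincidence([0.50, 2.00]))   # 1.5 s apart  -> no spike, charge leaks away
```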
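And the oscillator pair: two leaky neurons with constant drive where each spike knocks the rival's membrane down, standing in for the NPN inverter. The drive and inhibition numbers are illustrative; the point is the anti-phase alternation:

```python
# Mutual-inhibition oscillator pair: two leaky neurons under constant drive.
# Each spike subtracts a fixed amount from the other neuron's membrane.

tau, dt = 0.47, 0.001          # fast-neuron time constant from the tau table
V_th, V_reset = 1.0, 0.0
drive = 1.3                    # steady-state drive level, volts (assumed)
inhibition = 0.8               # voltage knocked off the rival per spike (assumed)

V = [0.0, 0.05]                # slight asymmetry so one neuron wins the first race
spikes = {0: [], 1: []}

for step in range(int(10 / dt)):
    t = step * dt
    for i in (0, 1):
        V[i] += (-V[i] + drive) * (dt / tau)     # leaky integration toward the drive
    for i in (0, 1):
        if V[i] >= V_th:
            spikes[i].append(round(t, 3))
            V[i] = V_reset
            V[1 - i] = max(V[1 - i] - inhibition, 0.0)   # suppress the rival

print("neuron 0:", spikes[0][:4])   # the two spike lists interleave in time
print("neuron 1:", spikes[1][:4])
```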
WHERE THIS IS GOING — THE MINI-STACK MODEL 🧠 ⬡ 板

The plan isn't to keep cramming more neurons onto one board. At some point that stops working for the same reasons a real brain doesn't just scale up as one big blob. The idea is to build small clusters of neurons, each one tuned for a specific role, and then connect them together the way brain regions connect.

The R and C values pick the personality of each neuron. Fast τ neurons (47kΩ×10µF, about 0.47s) go in sensory positions because they respond quickly and forget fast. Slow τ neurons (220kΩ×47µF, about 10s) go in integrator positions because they accumulate context over time. The resistor and capacitor aren't just circuit components, they're deciding what kind of neuron this is.
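The same pairing as a couple of lines of arithmetic, straight from τ = RC, using the two roles and component values stated above:

```python
# tau = R * C for the two neuron personalities named in the text.

neuron_roles = {
    "sensory (fast, forgets quickly)":  (47e3, 10e-6),
    "integrator (slow, holds context)": (220e3, 47e-6),
}

for role, (R, C) in neuron_roles.items():
    print(f"{role}: tau = R*C = {R * C:.2f} s")
```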

The op-amp choice matters too. Noisy MCP6002s in the sensory layer because biological sensory neurons actually have variance. OPA2340s in the association layer for reproducible behavior. LT1013 precision parts in the output layer where things need to be deterministic. The noisiness is designed in at the sensory end and designed out by the output end.

Each cluster gets its own board eventually. The plan is bismuth-poured wooden PCBs, one per brain region, connected by spike bus wires between them. The cables aren't just wiring, they're the analog of white matter tracts connecting brain areas. The physical separation is part of the architecture.

Rough neuron targets: Model 1 was around 20 neurons, a basic reflex arc; Model 2, around 70 neurons, enough for basic adaptive behavior; Model 3, around 150 neurons across clustered boards. C. elegans navigates and learns with 302 neurons. Somewhere in that range is where this gets genuinely interesting. Built from salvaged parts, figured out as it goes.

liminaut.dev · @druidtech · Circuit Circle