Tim Cvetko
1 min read · Apr 23, 2024


LNNs adapt their architecture via spike-timing-dependent plasticity (STDP).
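To make that concrete, here is a minimal sketch of a pair-based STDP weight update: a synapse strengthens when the presynaptic spike precedes the postsynaptic one (a causal pairing) and weakens otherwise. The learning rates and time constants below are illustrative assumptions, not values from any particular LNN implementation.

```python
import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    # Pair-based STDP (parameter values are illustrative): potentiate
    # when the pre spike precedes the post spike, depress otherwise.
    dt = t_post - t_pre                       # spike-timing difference (ms)
    if dt > 0:                                # pre before post -> strengthen
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                                     # post first (or together) -> weaken
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, 0.0, 1.0))   # keep the weight bounded

# Pre spike at 10 ms, post spike at 15 ms: causal pairing, so the weight grows.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```

The exponential makes the update largest for near-coincident spikes and negligible for widely separated ones, which is what lets the timing of activity, rather than a global error signal, reshape the connectivity.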

ChatGPT's analogy -> :))

Imagine training a Liquid State Machine (LSM) like teaching a group of interconnected water pipes to respond to different flows of water. You first lay out the pipes (the reservoir) randomly, but with properties chosen to guarantee interesting dynamics. Then you expose the system to different patterns of water flow (training data), adjusting the pipes' connections (synaptic strengths) slightly each time to reduce how far the flow deviates from the target (the error). With repeated exposure, the system tunes itself to carry these flows smoothly. After training, even new flow patterns are handled well: the learned structure lets the system generalize, tweaking its internal connections on the fly to accommodate new inputs, much as the pipes adjust to new flows. The sketch below makes this pipeline concrete.
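A real LSM uses spiking (e.g. leaky integrate-and-fire) neurons; to keep this sketch short, it substitutes a rate-based, echo-state-style reservoir and, as is standard in reservoir computing, trains only a linear readout on the reservoir states rather than the reservoir itself. The network sizes, the spectral-radius scaling, and the sine-prediction task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Lay out the "pipes": a fixed, random reservoir with stable dynamics.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))             # input -> reservoir
W_res = rng.normal(0.0, 1.0, (n_res, n_res))             # reservoir -> reservoir
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(u_seq):
    # Drive the reservoir with an input sequence and record its states.
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# 2. Expose it to "water flows": fit a linear readout mapping reservoir
#    states to the target signal by least squares.
t = np.linspace(0, 8 * np.pi, 400)
u_train = np.sin(t)                      # input pattern
y_train = np.sin(t + 0.5)                # target: phase-shifted copy
X = run_reservoir(u_train)
W_out, *_ = np.linalg.lstsq(X, y_train, rcond=None)

# 3. A new flow pattern: the same reservoir + readout generalizes.
u_new = np.sin(t + 1.0)                  # unseen phase offset
y_pred = run_reservoir(u_new) @ W_out
print("mean squared error on new flow:", np.mean((y_pred - np.sin(t + 1.5)) ** 2))
```

Scaling the recurrent weights so the spectral radius stays below 1 is what keeps the "water" from either dying out or overflowing: inputs echo through the reservoir long enough to be useful, and the cheap linear readout does the rest.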
