Exploring Neuroplasticity in Bio-Inspired Neural Networks
During the 2024 Neuromatch Academy, our team embarked on an ambitious project to explore the mechanisms of neuroplasticity in artificial neural networks. Drawing inspiration from the remarkable ability of C. elegans to recover from neural damage, we investigated whether bio-inspired neural architectures could exhibit similar adaptive capabilities when subjected to sensor deprivation. Our work builds upon the Neural Circuit Architectural Priors (NCAP) framework introduced by Bhattasali et al. (2022), extending it to explore a critical question: Can these networks exhibit neuroplasticity-like behaviors when damaged?
Background: The NCAP Architecture
The foundation of our study is the NCAP architecture developed by Bhattasali et al., which represents a fascinating intersection of neuroscience and artificial intelligence. This architecture translates the well-mapped neural circuits of C. elegans - one of the simplest organisms with a complete connectome - into a discrete-time artificial neural network.
The architecture consists of several biologically inspired elements:
- Modular structure: each body segment is controlled by a repeated microcircuit
- Constrained connectivity: each synapse is constrained to be either excitatory or inhibitory
- Specialized units: B neurons (sensory-motor neurons) and oscillator units
- Remarkably sparse connectivity: only 4 trainable parameters, achieved through aggressive weight sharing
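To give a feel for how such a weight-shared, sign-constrained microcircuit fits together, here is an illustrative toy sketch. This is our own simplified rendering, not the authors' reference implementation, and the parameter names (`bneuron_prop`, `bneuron_osc`, `muscle_ipsi`, `muscle_contra`) are labels we chose for the four shared weights:

```python
import numpy as np

def excitatory(w):
    """Clamp a shared weight to be non-negative (excitatory synapse)."""
    return max(w, 0.0)

def inhibitory(w):
    """Clamp a shared weight to be non-positive (inhibitory synapse)."""
    return min(w, 0.0)

class SwimmerMicrocircuitSketch:
    """One repeated microcircuit per joint; every joint reuses the same
    four shared scalars (illustrative, not the NCAP reference code)."""

    def __init__(self, n_joints, params):
        self.n_joints = n_joints
        self.p = params  # the 4 shared scalars, keyed by our hypothetical names

    def forward(self, joint_pos, t, period=60):
        phase = (t % period) < (period // 2)  # square-wave oscillator unit
        torques = []
        for i in range(self.n_joints):
            # proprioceptive input: position of the preceding segment
            prop = joint_pos[i - 1] if i > 0 else 0.0
            # dorsal/ventral B neurons, driven by shared excitatory weights
            b_d = (excitatory(self.p['bneuron_prop']) * max(prop, 0.0)
                   + excitatory(self.p['bneuron_osc']) * float(phase))
            b_v = (excitatory(self.p['bneuron_prop']) * max(-prop, 0.0)
                   + excitatory(self.p['bneuron_osc']) * float(not phase))
            # antagonistic muscles: ipsilateral excitation, contralateral inhibition
            m_d = excitatory(self.p['muscle_ipsi']) * b_d + inhibitory(self.p['muscle_contra']) * b_v
            m_v = excitatory(self.p['muscle_ipsi']) * b_v + inhibitory(self.p['muscle_contra']) * b_d
            torques.append(np.tanh(m_d - m_v))
        return np.array(torques)
```

The key property to notice is that the loop body never indexes a per-joint weight: every segment reads the same four scalars, which is what collapses the parameter count to 4 regardless of body length.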

Experimental Design
We designed three categories of experiments to probe the network's adaptive capabilities:
1. Sensor Ablation Patterns
- Alternating Pattern: Removing sensors 1, 3, 5, ... (every other sensor)
- Sequential Pattern: Removing sensors 1, 2, 3, ... (consecutive sensors)
2. Damage Scenarios
- "Born Without": Networks trained from scratch with specific sensors permanently disabled
- "Removed at Test": Fully trained networks with sensors disabled only during evaluation
- "Damage and Retrain": Pre-trained networks damaged and then allowed to adapt through continued training
Key Results
Our experiments revealed clear differences between the two damage patterns. The alternating pattern showed more gradual performance degradation, with a surprising recovery at maximum damage (667 vs. 643), suggesting compensatory mechanisms. The sequential pattern exhibited a steeper performance decline with little evidence of compensatory adaptation.

Deep Learning Insights
Sparse Connectivity as a Feature
The NCAP's extreme sparsity (4 parameters vs. thousands in MLPs) initially seems like a limitation. However, our experiments suggest this constraint actually enhances robustness. The forced modularity and local connectivity patterns create natural redundancy - when one module fails, others can partially compensate.
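To make the sparsity gap concrete, here is a quick parameter count for a hypothetical fully connected baseline next to NCAP's four shared weights (the MLP layer sizes are illustrative, not the baseline from our experiments):

```python
def mlp_param_count(layer_sizes):
    """Total weights + biases in a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Hypothetical MLP for a 5-joint swimmer: observation -> hidden -> hidden -> torques
mlp_params = mlp_param_count([10, 64, 64, 5])
ncap_params = 4  # shared scalars in the NCAP swimmer
print(mlp_params, ncap_params)  # prints: 5189 4
```

Even this modest baseline carries three orders of magnitude more parameters than the weight-shared circuit.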
Reinforcement Learning Considerations
The retraining experiments highlight an important RL principle: behavioral adaptation through policy modification. When sensors are damaged, the optimal policy changes. The NCAP architecture's modular structure allows for local policy adjustments without catastrophic forgetting of the overall locomotion strategy. We used Proximal Policy Optimization (PPO) for training, leveraging its stability and sample efficiency for continuous control tasks.
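The heart of PPO's stability is its clipped surrogate objective, which bounds how far each update can move the policy from the one that collected the data. A generic sketch of that loss (the textbook PPO-clip formulation, not our exact training code):

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO, written as a loss to minimize.

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio keeps the updated
    policy close to the data-collecting policy, which stabilizes retraining.
    """
    ratio = np.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two terms; negate for a loss
    return -np.mean(np.minimum(unclipped, clipped))
```

This conservatism matters for the "Damage and Retrain" scenario: it lets the damaged network adjust its policy locally without large destructive updates to the locomotion strategy it already learned.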
Implications and Future Directions
Our findings suggest that neuroplasticity in artificial networks may emerge not from complex learning rules but from appropriate architectural constraints. The NCAP's bio-inspired structure naturally supports graceful degradation and recovery.
These insights could inform the design of more robust robotic control systems: fault-tolerant robots that maintain functionality despite sensor failures, adaptive prosthetics that adjust to changing sensory feedback, and resilient autonomous systems such as vehicles or drones that adapt to damage.
Future research directions include:
- Dynamic architecture modification: networks that grow new connections in response to damage
- Multi-timescale adaptation: fast compensatory mechanisms vs. slow structural changes
- More complex morphologies: testing these principles on quadruped or humanoid architectures
Conclusion
Our exploration of neuroplasticity in bio-inspired neural networks reveals that architectural design choices profoundly impact a system's ability to adapt to damage. The NCAP architecture, despite its minimalist design, exhibits remarkable resilience that emerges from its biological constraints rather than despite them. This work demonstrates that the intersection of neuroscience and artificial intelligence continues to yield valuable insights for building truly robust AI systems.
This project was completed as part of Neuromatch Academy 2024. Special thanks to the NCAP paper authors for providing the foundation for this exploration.