Brain-inspired computing by assemblies of neurons
Deep neural networks have been widely adopted and proven effective across many applications, yet the backpropagation algorithm they rely on is computationally intensive and hard to interpret in terms that mirror human understanding. Humans can swiftly grasp and interpret complex patterns to make decisions, whereas neural networks depend on costly gradient-based training to learn.
In contrast, assembly calculus [1] offers a framework for understanding how groups of neurons collectively process information in ways that resemble cognitive functions. It emulates how neurons in the brain fire in groups (assemblies) to represent and transmit information, providing a biologically inspired learning mechanism that does not rely on backpropagation.
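As a rough illustration of this mechanism, the sketch below implements a simplified version of the projection primitive often described in the assembly-calculus literature: a stimulus repeatedly drives a target area of a random synaptic graph, a k-winners-take-all cap selects which neurons fire, and Hebbian plasticity strengthens synapses into the winners until a stable assembly forms. All function names, parameters, and the network setup here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def k_cap(total_input, k):
    # Winner-take-all: indices of the k neurons receiving the largest input
    return np.argsort(total_input)[-k:]

def project(stimulus, W_sa, W_aa, k, beta, rounds):
    """Project a stimulus into an area, forming an assembly (simplified sketch).

    stimulus: binary activity vector of the stimulus population
    W_sa:     stimulus-to-area synaptic weights
    W_aa:     recurrent weights within the area
    k:        assembly size (cap); beta: Hebbian plasticity rate
    """
    n = W_aa.shape[0]
    active = np.zeros(n)
    for _ in range(rounds):
        # Total input = feedforward drive + recurrent drive from current assembly
        total_input = stimulus @ W_sa + active @ W_aa
        winners = k_cap(total_input, k)
        new_active = np.zeros(n)
        new_active[winners] = 1.0
        # Hebbian update: multiplicatively strengthen synapses from
        # currently firing neurons into the new winners
        W_sa[:, winners] *= 1 + beta * stimulus[:, None]
        W_aa[:, winners] *= 1 + beta * active[:, None]
        active = new_active
    return np.flatnonzero(active)

# Hypothetical setup: sparse random connectivity, as assumed in this sketch
rng = np.random.default_rng(0)
n, n_stim, k, p = 1000, 1000, 50, 0.05
W_sa = (rng.random((n_stim, n)) < p).astype(float)
W_aa = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(W_aa, 0)  # no self-synapses
stimulus = np.zeros(n_stim)
stimulus[:k] = 1.0  # the stimulus is itself a k-sized firing set

assembly = project(stimulus, W_sa, W_aa, k, beta=0.1, rounds=10)
```

Note that learning here happens only through local synaptic updates and the cap operation, with no global gradient signal, which is the property the project aims to compare against backpropagation-trained networks.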
By utilizing assembly calculus, we can gain valuable insights into the dynamic interactions and transformations within neural networks, potentially leading to models that are more interpretable and aligned with human-like processing. This approach can deepen our understanding of data utilization and aid in the development of new architectures that more closely mimic human cognitive processes.
This project will compare assembly calculus with conventional neural networks [2], evaluating their accuracy and efficiency across various real-world problems.
References:
[1] Papadimitriou, Christos H., et al. "Brain computation by assemblies of neurons." Proceedings of the National Academy of Sciences 117.25 (2020): 14464-14472.
[2] Dabagia, Max, Santosh S. Vempala, and Christos Papadimitriou. "Assemblies of neurons learn to classify well-separated distributions." Conference on Learning Theory. PMLR, 2022.
Supervisor: Ting Dang