Memory is a vast field in the study of the brain, and the literature is as vast as the sea. Some of it concerns different types of memory: episodic, declarative, implicit, and so on; some concerns duration: long-term versus short-term memory; some concerns different species: humans, macaques, mice, fruit flies; and some concerns the relationship between memory and perception, attention, thinking, emotion, and so forth.
Although many people flock to study "memory" every year, "memory" itself is not a suitable concept for understanding how the brain works.
The term "memory" originally described a capability from the standpoint of behavioral performance or introspection. Like "free will," it is a superficial self-description. The capabilities we actually possess may not map onto "memory" or "free will," and the brain may contain no cleanly separable areas corresponding to these concepts. Yet we are clearly fond of "memory." When computers were first designed, people assumed that, like humans, they should have a dedicated "memory module," which gave us the hard drives, memory sticks, and of course video memory that we are familiar with today. In brain science, people likewise believe there are distinct neural correlates supporting different types of memory. Perhaps most doctoral students in brain research treat "memory" as a kind of "entity," and people "pursue the traces of memory" much as physicists once pursued the traces of the "ether."
However, do we really need "memory"? Do we really have "memory"?
My view is that there is no "memory system" in our brain; if one seems to exist, it is an artifact, the result of our insistence on the concept.
Let's start with a simple observation: do you need to design a special memory module into an artificial neural network? I believe most people who have trained one would say no. A neural network can recognize faces not because we equipped it with a face-database module, but because extensive training assigned weights to all of its connections, giving it the ability to process image information, not a store of memories. That processing ability is what lets it tell us who appears in a photo. So our starting point should not be "how to remember" but "how to process."
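The point can be made concrete with a toy sketch (the patterns, labels, and task below are invented purely for illustration): after training a tiny one-layer network, there is no retrievable record of the training examples anywhere; everything the network "knows" exists only as connection weights that shape how inputs are processed.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy dataset: two 4-pixel "images" with labels
# (1 = "pattern A", 0 = "pattern B").
data = [([1, 0, 1, 0], 1), ([0, 1, 0, 1], 0)]

w = [0.0] * 4   # connection weights: the only "memory" there is
b = 0.0
lr = 0.5        # learning rate

# Training adjusts the weights; no example is stored anywhere.
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    # "Recall" is nothing but a forward pass: processing, not lookup.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(round(predict([1, 0, 1, 0])))  # pattern A -> 1
print(round(predict([0, 1, 0, 1])))  # pattern B -> 0
```

Nothing in this program resembles a database: delete the `data` list after training and `predict` works unchanged, because the ability to classify lives entirely in `w` and `b`.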
Next, consider some examples from neuroscience. Our visual system is hierarchical: from V1 to IT cortex, it processes visual information from simple and fine-grained to complex and general, from specific to abstract, from individual features to associations among many. Many people believe these cortical areas have no memory function, and that memory only begins once information reaches the hippocampus. This view carves the brain up by fiat: processing is processing, memory is memory. It "artificializes" and fragments our understanding of the brain and blocks the formation of a unified, holistic, consistent picture. To promote such a holistic understanding, I advocate giving up the obsession with "memory" and seeing the visual system, hippocampus, frontal lobe, and so on as a unified whole: they achieve similar purposes by similar mechanisms.
How to understand the brain from a holistic, unified perspective is a major open problem in brain science. I have noticed some scholars abroad thinking in this direction; for example, some regard the hippocampus as an extension of the visual cortex, which is a good instance of this approach. Such thinking may never be published in CNS, but it can help shift the field.