Conceptual Relations: Musical Representations Do Not Need Music Theory

A theory of conceptual relations for musical design processes is proposed. It is suggested that, for non-musicians, musical representations arise during listening mostly through a process of abduction that combines conceptual relations and is influenced by emotional ratings.

Author(s)

INTRODUCTION

The German sociologist, musicologist and composer Theodor W. Adorno writes of the musical listener:

Its horizon is the concrete musical logic: one understands what one perceives in its course, though certainly not as a literal causal necessity. The locus of this logic is technique; the listener who has been introduced to music theory conceptualizes individual elements of what is heard most immediately as technical, and in technical categories much of the meaning is revealed. [1]

This statement implies, first, that the construction process depends on explicitly learned musical logic; second, that, in order to understand musical processes, listeners have to develop categories for individual sound elements; and third, that many musical meanings are revealed only after musical logic has been learned.

On the other hand, however, music can offer different grounds and levels of understanding, and musical thought does not normally follow the rules of formal logic.

 People use their implicit understanding of how the world works to understand structures, rather than the formal principles of logic. [2]

From this point of view, musical meaning certainly depends on musical learning processes, such as musical logic, but the implicit understanding of how the world works is an important argument for an extended view of musical construction.

From fundamental concepts such as top to bottom and left to right, everyone develops ever-expanding concepts of how the world works. In addition, Gilles Fauconnier and Mark Turner have shown that, during the connection of various concepts, as is constantly performed during musical construction, room always exists for many different possible lines of elaboration.

However, assessment and recognition of elements or processes, whether conscious or unconscious, are also caused by emotional states. Therefore,

 It appears likely that emotion and cognition are two sides of the same coin of elaboration. [3] 

Consequently, emotional states must be involved in discovering and understanding the virtual relations between sounds, that is to say, in building their significance.

When musicians, or non-musicians, describe parts of music as sad, happy, elated, and so on, they are referring to emotional states. Such states are the products of complex processes, and imply that elements and sound structures were recognized, rated and related to each other.

However, although both musicians and non-musicians develop emotional states during the course of listening, only musicians have access to explicitly learned musical logic. Hence, it is important to investigate how listeners who lack this musical logic distinguish emotional states in the course of listening to music.

The human being conceptualizes and classifies the experience of the ‘inner as well as the outer world’ because envisaging and choosing occur primarily in terms of such classes, to wit, in this case, classes of emotions or cognitive categories. [4]

Such classification and categorization of information is a general mental operation of human beings. The first reason for this, in a psychological sense, is the need to foresee the probability of coming events, whether the behavior of people or the course of musical patterns. David Huron, for example, proposed classifying expectation emotions into five psychological systems. Second, psychological behavior addresses different biological problems, aiming to adapt behavior to changes in the environment.

From this point of view, we propose that musical representations are constructed as combinations of conceptual relations between classes or categories. This construction is performed during classification or categorization, driven by unconscious and conscious ratings of possible relations. Such is the functionality of the nervous system: it can only process patterns, and pattern comparison allows the creation of different classes and of their relations to each other.
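
To make this claim concrete, the following Python sketch illustrates, in a deliberately simplified way, how pattern comparison alone can give rise to classes: each incoming feature pattern is compared with the prototypes of the existing classes, and it either joins the closest class or founds a new one. The feature values, the distance measure and the threshold are hypothetical illustrations, not part of the proposed theory.

    import math

    def distance(a, b):
        """Euclidean distance between two feature patterns."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def categorize(patterns, threshold=1.0):
        """Compare each incoming pattern with the prototype (mean) of every
        existing class; it joins the closest class or founds a new one."""
        classes = []
        for p in patterns:
            best, best_d = None, threshold
            for c in classes:
                prototype = [sum(dim) / len(c) for dim in zip(*c)]
                d = distance(p, prototype)
                if d < best_d:
                    best, best_d = c, d
            if best is None:
                classes.append([p])   # no class is close enough: new class
            else:
                best.append(p)        # pattern joins the closest class
        return classes

    # Toy sound events as (pitch, loudness) feature patterns.
    events = [(1.0, 0.2), (1.1, 0.25), (5.0, 0.9), (5.2, 0.85)]
    print(len(categorize(events)))    # -> 2 classes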

WORKING OF THE NERVOUS SYSTEM IN RELATION TO MEMORY

Categorization is not only a psychological organization of the environment; it is already performed during information processing in the nervous system.

Human beings compare and relate information processes by using memory. Memory influences how we decide when groups of events end, and other groups begin, and how these events are conceptually related. It also allows everyone to comprehend sequences of events in their totality, and to build expectations about what will happen next.

From the current physiological perspective, memory is the ability of nerve cells in the brain to alter the strength and number of their interconnections in ways that persist over time. However, memory is not a unitary entity, but is understood as a subdivided system that depends on content and time. It is carried out by different neural networks, which process acquisition, storage, consolidation and retrieval in very different structural combinations. Memory processes are therefore functional rather than structural.

CATEGORIZATION IN THE AUDITORY MEMORY FROM THE COGNITIVE PERSPECTIVE

In the auditory memory, the inner ear converts sounds into trains of nerve impulses that represent the frequency and amplitude of individual acoustical vibrations. [5]

This information is usually stored for less than a second in the auditory ultra-short-term memory, where it is not categorized in any way, but persists as continuous sensory data. Jamshed J. Bharucha showed in 1999 that many specialized groups of neurons extract individual acoustical features, such as pitch and overtone structure, from the continuous data of ultra-short-term memory.

During the subsequent process, these features are bound together with different simultaneous features, and correlated into single auditory events. [6] 
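
As a purely illustrative sketch, the following Python fragment mimics this two-stage idea: specialized extractors each compute one acoustical feature from a short buffer of samples, and the simultaneously extracted features are then bound into a single event record. The crude zero-crossing pitch estimate and the event structure are simplifying assumptions, not models from the cited literature.

    import math
    from dataclasses import dataclass

    @dataclass
    class AuditoryEvent:
        """Simultaneous features bound into one event record."""
        pitch_hz: float
        loudness: float

    def extract_pitch(buffer, sample_rate=44100.0):
        """Crude pitch estimate from upward zero crossings (illustration)."""
        crossings = sum(1 for a, b in zip(buffer, buffer[1:]) if a < 0 <= b)
        return crossings * sample_rate / len(buffer)

    def extract_loudness(buffer):
        """Root-mean-square amplitude as a simple loudness proxy."""
        return math.sqrt(sum(x * x for x in buffer) / len(buffer))

    def bind(buffer):
        """Bind the simultaneously extracted features into one event."""
        return AuditoryEvent(extract_pitch(buffer), extract_loudness(buffer))

    # 0.1 s of a 440 Hz sine tone at 44.1 kHz.
    sr = 44100.0
    tone = [math.sin(2 * math.pi * 440.0 * n / sr) for n in range(4410)]
    print(bind(tone))   # pitch_hz near 440 (coarse), loudness near 0.707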

Gerald M. Edelman deduces that feature extraction and perceptual binding reduce the large mass of sensory information, and elaborate a perceptual categorization. Certain perceptual events activate conceptual relations in those parts of the long-term memory that have been activated by similar events in the past.

 This can take place in a conscious or unconscious way. But most of the content of long-term memory is unconscious and forms an activation context for current awareness. [7]

This context, or conceptual relation between single auditory events and the past, takes the form of expectations and other related knowledge that can influence the direction taken by current consciousness.

Some information from long-term memory reaches consciousness through the highest state of perceptual activation. Current awareness can therefore consist of a vivid perceptual categorization together with conceptual categorization arising from long-term memory.

Information that has just been in the focus of conscious awareness may then persist in short-term memory, where it is no longer in consciousness but is held ready for recall. This recall availability lasts only about 3-5 seconds on average (sometimes longer), unless the information is rehearsed, that is, recycled through the process of conscious awareness.
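
A minimal Python sketch of such a store, assuming the 3-5 second figure above, might look as follows; the class name, the default lifetime and the interface are hypothetical illustrations.

    import time

    class ShortTermStore:
        """Items stay recallable for a few seconds unless rehearsed
        (illustrative; the 4-second default reflects the range above)."""

        def __init__(self, lifetime_s=4.0):
            self.lifetime_s = lifetime_s
            self.last_access = {}  # item -> time it last reached awareness

        def hold(self, item):
            self.last_access[item] = time.monotonic()

        def rehearse(self, item):
            # Recycling an item through conscious awareness resets its clock.
            if item in self.last_access:
                self.hold(item)

        def recallable(self):
            now = time.monotonic()
            return [item for item, t in self.last_access.items()
                    if now - t < self.lifetime_s]

    store = ShortTermStore()
    store.hold("opening motif")
    store.rehearse("opening motif")  # keeps the motif available for recall
    print(store.recallable())        # -> ['opening motif']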

At this point, there seems to be a difference in the construction process between musicians and non-musicians. Musicians have access to previously learned sound schemata, and may group sounds according to them. Because of the limit of 7 ± 2 actively held chunks in short-term memory, non-musicians cannot group certain chunks into a pre-learned, higher-level musical schema. Their schema-driven grouping is thus limited, and the relationships they construct over longer time spans appear to be fewer and less salient than for musicians. However, explicit schema-driven grouping is only one path to musical construction: it is also possible to relate sounds and tension-release patterns in music without naming them explicitly.
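
The following toy Python example illustrates, under invented note names and schemata, how a pre-learned schema lets the same event sequence fit within the 7 ± 2 limit: a listener with schemata compresses known subsequences into single chunks, while one without them must hold every event separately.

    CAPACITY = 7  # middle of the 7 +/- 2 range

    def chunk(events, schemata=()):
        """Greedily replace known subsequences (schemata) by single chunks."""
        chunks, i = [], 0
        while i < len(events):
            for schema in schemata:
                if tuple(events[i:i + len(schema)]) == schema:
                    chunks.append(schema)  # a whole pattern = one chunk
                    i += len(schema)
                    break
            else:
                chunks.append(events[i])   # an unschematized single event
                i += 1
        return chunks

    melody = ["C", "E", "G", "C", "E", "G", "D", "F", "A"]
    triad = (("C", "E", "G"),)                    # one pre-learned schema
    print(len(chunk(melody)) <= CAPACITY)         # False: nine raw events
    print(len(chunk(melody, triad)) <= CAPACITY)  # True: two chunks + three events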

For musicians and non-musicians alike, if information is novel in some way, or, more precisely, if conceptual relations between sounds build a novel significance, then this novelty joins the other conceptual categories in long-term memory.

As items are compared, however, an important factor is the emotional process of evaluation. Emotion also dictates why perceptual activation selects certain information from long-term memory.

CATEGORIZATION AS A PROCESS OF MEMORIZATION FROM THE PERSPECTIVE OF EMOTION

From an accepted perspective:

 Individual states, or categories of emotional behavior, are appropriate functions, or responses to changes in the environment that have worked over the course of evolution. [8] 

LeDoux distinguishes two kinds of response to stimuli, namely the ‘congenital trigger’ and the ‘learned trigger’.

 The first is an evaluation mechanism, given to us by evolution, designed to detect a particular pattern and trigger reactions that serve to prepare an organism for changes in the environment. The evaluation mechanism can also learn from stimuli, and often functions in connection with the congenital trigger from which the reactions are expected. These are called learned triggers. [9]

Unconscious or conscious expectation processes are fundamental requirements for human survival. Human behavior is continuously guided and optimized by anticipatory responses. But,

 The expectations we form while listening to music have no obvious implications for survival, even though a certain degree of arousal, such as tension generated by anticipated events, is basic to musical functionality. [10]

Arousal alone does not constitute emotion. The physiological effects of arousal may occur during and after listening to music, but arousal by itself is not the emotional response.

For LeDoux, emotional experience caused by listening to music is a ‘learned trigger’.

 In addition, in George Mandler’s (1984) view, held also by Leonard B. Meyer (1956), incongruities between expected and actual events lead not only to an arousal response, but also to a cognitive reevaluation of the stimuli. It is the combination of arousal and cognitive activity that leads to an emotional experience. [11]

That point is important because classification, conscious or unconscious, explicitly named or not, tends to undermine and weaken the emotional response. We can therefore say that, for non-musicians as well as musicians, memory helps to compare perceptual and conceptual information, and to store and retrieve classes and categories derived and reduced from patterns. But how pieces of information are related to each other during the course of listening also depends on individual emotional ratings of their possible relations to what will happen next, and these ratings can emerge in consciousness as emotional states.

CONDITIONS AND FUNCTIONALITY OF CONCEPTUAL RELATIONS BETWEEN SOUNDS

Starting from the basic function of memory for human beings, which is to be prepared for anticipated events, listeners mostly develop musical representations through a process of abduction. There are also situations in which musicians and non-musicians cannot focus on music; therefore, it seems possible, under certain circumstances, for listeners to hear music in a stream-of-consciousness mode.

Abduction means that, in the first step of reasoning, acoustical information generates a hypothesis about a probable musical concept, in the form of a prediction referring to a certain expectation horizon. The possible consequences of this hypothetical concept can be determined deductively, and its musical elements searched for inductively. Regarding the process of perceptual categorization in auditory memory, it is likely that the nervous system already tends, in these early processes, to create a hypothesis about what will happen next. This happens because certain perceptual events activate conceptual relations in those parts of long-term memory that were activated by similar events in the past, and/or were already conceptualized in relation to other events. This analysis is important in the sense that conceptual relations assign a present identity to acoustical information on the basis of individual experience; acoustical information does not possess a static identity a priori. In addition, a constructed present identity is a product of various conceptual relations based on models of cause and effect, prediction, space, time, role, identity, analogy, change, property and category, which, depending on their weighting, can produce different avatars of the same musical identity or concept.
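
To make the abductive cycle concrete, here is a deliberately simplified Python sketch; the two concepts, their expected continuations and the fit measure are invented for illustration. An observed event suggests the concept that best explains it (abduction), the hypothesis yields a prediction (deduction), and incoming events can then be checked against that prediction (inductive search).

    # Hypothetical mini-repertoire: each concept lists expected continuations.
    CONCEPTS = {
        "ascending_scale": ["C", "D", "E", "F"],
        "arpeggio":        ["C", "E", "G", "C"],
    }

    def abduce(heard):
        """Abduction: hypothesize the concept that best explains the events
        heard so far (ties resolve to the first-listed, default concept)."""
        def fit(expected):
            return sum(a == b for a, b in zip(heard, expected))
        return max(CONCEPTS, key=lambda c: fit(CONCEPTS[c]))

    def expected_next(concept, position):
        """Deduction: what the hypothesis predicts at the given position."""
        sequence = CONCEPTS[concept]
        return sequence[position] if position < len(sequence) else None

    heard = ["C"]                               # t0: first acoustical event
    hypothesis = abduce(heard)                  # -> "ascending_scale"
    prediction = expected_next(hypothesis, 1)   # inductive search target: "D"
    print(hypothesis, prediction)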

Figure 1 shows an abduction process during the course of listening, that is to say, a process of conceptual relations, within the scope of current awareness, between three perceptually and conceptually categorized pieces of acoustical information activated from long-term memory. Certain physiological conditions, such as arousal and tension, are assumed.

At time t0, the acoustical information receives a present identity through the process of memorization and retrieval. This identity construction, explicitly named or not, provides the basis of a hypothesis about the plausible musical concept. Depending on their musical experience and their current individual constitution, non-musicians and musicians alike create a mixture of different conceptual relations, in the form of a deductive analysis aiming at anticipating the identity of future acoustical information. That mixture produces emotional states and could include, for instance, relations of similarity, cause and effect, time, role, property and category, or explicit musical schema-driven grouping with possible events in the future.

During this stage of inductive search, it is possible, both for non-musicians and musicians, that the acoustical information heard at t1 and/or t2 changes the mixture of conceptual relations to previous acoustical information, in the sense that the anticipated concept becomes less likely, or remains unrealized. This means that perceptually and conceptually categorized acoustical information, activated from long-term memory, feeds its present identity into the mixture of conceptual relations and initiates a new abduction process, which can change physiological conditions such as tension or arousal, musical concepts and/or emotional states.
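
Continuing the abduction sketch above (reusing its hypothetical CONCEPTS, abduce and expected_next), a disconfirming event at t1 triggers a new abduction over the extended sequence, which can flip the hypothesis and, with it, the expectation:

    # t1: "E" arrives instead of the predicted "D"; re-abduction over the
    # extended sequence flips the hypothesis and its prediction.
    heard.append("E")
    hypothesis = abduce(heard)                  # now -> "arpeggio"
    prediction = expected_next(hypothesis, 2)   # new expectation: "G"
    print(hypothesis, prediction)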

In terms of the musical perception of time, the conditions and the functionality of musical construction outlined above are important for an extended perspective: musical time is not composed of a density of differentiable events, but results from the succession, and possible repetition, of abduction processes.

CONCLUSION

Drawing on the resources of modern science, we have tried to show in this paper that the classification and categorization of information is a general mental operation in human beings. Hence, we suggested, non-musicians also have the ability to rate, separate and group individual sound events in relation to musical representations. Those representations can differ from those of trained musicians in their structure, time span, and effect.

Our main point was to sketch an initial draft of a new musical theory, based on mixtures of conceptual relations which, in a process of abduction, assign a dynamic identity to acoustical information. Such relations are responsible for the musical concepts and emotional states that arise while listening to music, and for the musical perception of time.

References and Notes: 

  1. Theodor W. Adorno, Einleitung in die Musiksoziologie (Frankfurt: Suhrkamp, 1962), 12.
  2. D. Kahneman, P. Slovic and A. Tversky, eds., Judgment under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press, 1982).
  3. Joseph LeDoux, “Cognitive-Emotional Interactions in the Brain,” Cognition and Emotion 3, no. 4 (1989): 267–289.
  4. Leonard B. Meyer, “Music and Emotion: Distinctions and Uncertainties,” in Music and Emotion: Theory and Research, eds. Patrik N. Juslin and John A. Sloboda (New York: Oxford University Press, 2001), 341–360.
  5. Pierre Buser and Michel Imbert, Audition (Cambridge, MA: The MIT Press, 1992), 156–171.
  6. Albert S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound (Cambridge, MA: The MIT Press, 1990), 213–394.
  7. Bernard J. Baars, A Cognitive Theory of Consciousness (New York: Cambridge University Press, 1988), 137–176.
  8. Philip N. Johnson-Laird and Keith Oatley, “Basic Emotions, Rationality, and Folk Theory,” Cognition and Emotion 6, no. 3–4 (1992): 201–223.
  9. Joseph LeDoux, Das Netz der Gefühle: Wie Emotionen entstehen (München: DTV, 2001), 137.
  10. Jeff Hawkins, On Intelligence (New York: Henry Holt and Company, 2004).
  11. William Forde Thompson, Music, Thought, and Feeling: Understanding the Psychology of Music (New York: Oxford University Press, 2009), 132.