Practice Does Not Make Perfect

Miriam A. Mosing, Guy Madison, Nancy L. Pedersen, Ralf Kuja-Halkola and Fredrik Ullén
Practice Does Not Make Perfect: No Causal Effect of Music Practice on Music Ability
Psychological Science 2014 25: 1795
DOI: http://dx.doi.org/10.1177/0956797614541990
(read this paper)

The relative importance of nature and nurture for various forms of expertise has been intensely debated. Music proficiency is viewed as a general model for expertise, and associations between deliberate practice and music proficiency have been interpreted as supporting the prevailing idea that long-term deliberate practice inevitably results in increased music ability. Here, we examined the associations (rs = .18–.36) between music practice and music ability (rhythm, melody, and pitch discrimination) in 10,500 Swedish twins. We found that music practice was substantially heritable (40%–70%). Associations between music practice and music ability were predominantly genetic, and, contrary to the causal hypothesis, nonshared environmental influences did not contribute. There was no difference in ability within monozygotic twin pairs differing in their amount of practice, so that when genetic predisposition was controlled for, more practice was no longer associated with better music skills. These findings suggest that music practice may not causally influence music ability and that genetic variation among individuals affects both ability and inclination to practice.

I ran across this paper in Rethinking Expertise, and I have to admit that I read it wanting to find where the authors went wrong. I think twin studies and other genetically informed methods are among the best tools we have for understanding how individual variation shapes the kind of people we end up being and the abilities we end up possessing. Well-established results in behavior genetics are already quite nonintuitive, especially the second law of behavior genetics: genes have a stronger effect than family environment (including wealth and social class), and the effect of family is often so small as to be difficult to detect. I also have high hopes for statistical causal analysis, a fascinating set of techniques with great promise for shedding light on many aspects of the human condition where experiments are impossible or unethical.
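
To make the paper's key twin analysis concrete, here is a minimal simulation sketch of the co-twin control design (my own toy illustration in Python, with made-up parameters, not the authors' actual biometric model). Practice and ability share a genetic cause but practice has no causal effect on ability; the population correlation is substantial, yet differences within MZ pairs show essentially no association, which is the pattern the paper reports.

  import numpy as np

  rng = np.random.default_rng(0)
  n_pairs = 10_000

  # Shared genetic factor, identical within each MZ pair (toy model).
  g = rng.normal(size=n_pairs)

  def twin(g, rng):
      # Practice and ability both load on the same genetic factor, plus
      # independent nonshared-environment noise; there is NO causal path
      # from practice to ability in this model.
      practice = 0.7 * g + rng.normal(scale=0.7, size=g.size)
      ability  = 0.7 * g + rng.normal(scale=0.7, size=g.size)
      return practice, ability

  p1, a1 = twin(g, rng)
  p2, a2 = twin(g, rng)

  # Cross-sectional correlation is substantial, driven entirely by genes...
  print(np.corrcoef(np.r_[p1, p2], np.r_[a1, a2])[0, 1])   # roughly 0.5

  # ...but the co-twin control (within-pair differences) shows ~zero association.
  print(np.corrcoef(p1 - p2, a1 - a2)[0, 1])               # roughly 0.0

Real twin analyses (ACE decompositions) are more involved, but the intuition is the same: genetically identical twins act as matched controls for each other.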

I am also an amateur musician who has practiced music on and off for going on 20 years of my 45-year life. I played trumpet, tuba, and recorder in high school, then tuba again as a young adult, and for the past eight-plus years I have been a voice student and have sung in choirs. I am currently singing in a mixed professional/amateur choir with weekly rehearsals, taking weekly lessons, and doing very little individual practice.

To me, the title of this paper is nonsensical. If we define a cause as something which, if changed, would result in a changed outcome, then it is blatantly obvious that my musical experience has a causal role in my performance ability. I could not do what I do at all without that experience. And yet it is also clear to me that the teachers, accompanists, and directors I've worked with have abilities so far superior to my own as to be qualitatively different. As in every other area of human performance, there is a complex interplay of genetics, voluntary actions, and environmental opportunities, as argued in Rethinking Expertise.

What were the authors thinking when they wrote the title and abstract? I do applaud publishing all your results, however puzzling they may seem, but it would have been possible to present the results without seeming to favor an intellectually titillating but basically silly conclusion.

Does this result fundamentally call into question the validity of twin study methods and statistical causal analysis? Fortunately not. I think the most serious weaknesses of these results are specific to the methods used to study expert music performance, but the paper does also raise more general questions about the limitations of twin studies for studying expert performance.

The correct takeaway from this paper is clearly articulated in the discussion section:

The music abilities measured here can presumably be regarded as more general sensory capacities used to process musically relevant auditory information. In contrast, the skills that improve from playing an instrument may be more domain specific, involving the acquisition of instrument-specific sequential motor skills, score reading, and memorization. It is likely that the observed effects of music practice on the brain predominantly reflect the development of such specific skills, rather than the improvement of a general ear for music.

Face Validity

In other words, the “musical ability” referred to in the abstract is the result of a set of tests which moderately reliably measure a construct that is called “musical ability” in this area of research, but which in important ways is entirely different from what musicians are trying to learn when they practice individually or rehearse as a group. This criticism becomes much clearer when read in the context of the very fine discussion of transfer effects in Rethinking Expertise (which has authors in common with this paper). It is a highly reproducible result that transfer of trained skill between tasks is both weak and very sensitive to even small differences in the task.

Given this, how is it reasonable to suppose that training to perform music on an instrument should show strong transfer to the tested perceptual skills of pitch discrimination and melodic and rhythmic recognition? These perceptual skills are important for music teachers and directors, and are indeed taught, but only at the college level, as part of professional preparation.

In terms of testing theory, I think that the construct “musical ability” lacks face validity. It does not represent what musicians seek to achieve, and it does not represent what audiences want to hear. “What an amazing musician! Did you see how he could tell the difference between those melodies?”

Who is an Expert?

There are very compelling practical reasons why perception tasks are used in studies of “musical ability”, and these come from a pronounced asymmetry between music production (composition and musicianship) and music perception (music appreciation). The commonsense understanding of this is “Listening to music is easy; making music is hard.”

At the most concrete level, it is far, far easier to construct a reliable multiple-choice or true/false musical perception task than it is to construct a music generation task.
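
To illustrate why (this is my own hypothetical trial format, not necessarily the battery used in the paper): a same/different melody-discrimination item has an objective answer key, so scoring it is a single comparison, with no human judgment of musical quality involved.

  # A same/different melody-discrimination trial is trivially easy to score,
  # because there is an objective answer key. (Hypothetical trial format.)
  standard   = [60, 62, 64, 67]   # reference melody, as MIDI note numbers
  comparison = [60, 62, 65, 67]   # same melody with one note altered
  response   = "different"        # the participant's answer

  melodies_differ = standard != comparison
  correct = melodies_differ == (response == "different")
  print(correct)   # True -- no judgment of musical quality is needed

Scoring a performance or an improvisation, by contrast, requires expert raters, rating rubrics, and checks of inter-rater reliability.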

What even is musicianship? What would we want to test? Mainstream music education (as taught in school) is oriented toward performing from printed music, yet many of those who do regularly perform music specialize in popular music or other informal music traditions which emphasize learning by ear, and perhaps improvisation. In a music performance scale, should performing from a printed score be part of the test? What about improvisation?

If we are looking at a broad population-based study, such as one drawn from a twin registry, then we are really talking about “ordinary musicianship”. 1) These people won't be what we would usually think of as musical experts, but music learning is attractive as a model of expert performance because there are many people with some musical experience.

One caution is that most of the people who have practiced music at all did so decades ago, in their youth, so their trained expertise may have faded away. This would attenuate the measured correlation between practice and skill.
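
A toy simulation (my own illustration, with made-up numbers) shows how skill fade attenuates the correlation: if measured skill retains only a random fraction of the skill that practice originally produced, the practice-skill correlation drops substantially.

  import numpy as np

  rng = np.random.default_rng(1)
  n = 10_000

  # Toy model: skill at the end of training is driven largely by practice.
  practice   = rng.normal(size=n)
  skill_then = practice + rng.normal(scale=0.5, size=n)

  # Decades later, a random fraction of the trained skill has faded,
  # so today's measurement retains only part of the original signal.
  retention = rng.uniform(0.0, 1.0, size=n)
  skill_now = retention * skill_then + rng.normal(scale=0.5, size=n)

  print(np.corrcoef(practice, skill_then)[0, 1])   # high, roughly 0.9
  print(np.corrcoef(practice, skill_now)[0, 1])    # attenuated, roughly 0.6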

What is Hard?

The question of what in music is easy and what is hard is itself a hard question. As Daniel Levitin argues quite convincingly in This Is Your Brain on Music, we are (almost) all intuitive experts in music understanding. Music is intended to be pleasing and to communicate emotion, and good music is pleasing and does communicate emotion. Yet the rare cases of amusia show that music understanding relies on some specialized neural mechanisms. Also, our almost universal expertise in music understanding is a culture-bound consequence of music being a cultural universal: exposure to some style of music is almost inescapable, no matter where you grow up.

Someone who is not trained as a musician might well say: “I may not know much about music, but I know what I like.” Yet you, dear reader, really do know a lot about understanding musical performances, likely in the context of Western tonal music generally and within the forms of specific genres, such as “heavy metal”. In the modern world, you would have to wear earplugs as you went about your daily life to avoid being exposed to reasonably complex, professionally performed music. I think it is likely that performance on music perception tasks is strongly influenced by experience, but the required experiences are nearly universal.

As I argue in Representational Opacity, experience from artificial intelligence has generated some very nonintuitive results about what is easy and what is hard. When we look specifically at music, we see the same pattern: what is easy for us (understanding music) is actually a far more difficult problem than what is hardest for beginning musicians (getting the right notes). It is really easy to get computers to generate note sequences; as soon as computers existed, people were using them to generate music. This tells us much more about humans, music, and music perception than it tells us about computers. As of 2016, after nearly 70 years of computer music, software still struggles even to determine what notes are being played in a musical recording, or how to describe the lengths of the notes and the rhythmic patterns (such as meter) in formal music notation. Software is nowhere near being able to tell whether music is good music, or what feeling it conveys. Long term, I am optimistic about the prospects for AI, but it is clear that music understanding is a much harder task than mere music production.
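
As a concrete illustration of how mechanically easy note generation is, here is a trivially simple “composer” (a toy sketch of my own, not any particular historical system): it emits a random melody in the C major scale as MIDI note numbers. Nothing in it knows whether the result is good music or what feeling it conveys.

  import random

  random.seed(0)
  c_major   = [60, 62, 64, 65, 67, 69, 71, 72]   # C4 through C5, as MIDI note numbers
  durations = [0.25, 0.5, 1.0]                   # note lengths in beats

  # Sixteen (pitch, duration) pairs drawn at random from the scale.
  melody = [(random.choice(c_major), random.choice(durations)) for _ in range(16)]
  print(melody)

Going the other direction, from an audio recording back to notes, rhythms, and meter, let alone to a judgment of quality or feeling, is the hard part.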

Causation

1)
It's estimated that there are 50,000 to 100,000 professional musicians in the US, out of a population of roughly 300 million, which is well under 0.1% of the population.