Friday, April 20, 2012
Yesterday we had a nothing-special lesson, with no great outbursts of creativity, which nevertheless went very well. It's the kind of staple lesson setup that requires very limited preparation.
The goal: students should understand the concepts of tangents and normals to a curve, and be able to calculate the equations of these lines.
It's not exactly brain surgery, but I find students often get lost in questions about tangents and normals: they have a hard time connecting the many relevant concepts - derivative, gradient, equation of a line, constant term, perpendicular, negative reciprocal, and so on. They start doing funny things, like plugging values of x into the derivative function instead of the original function when trying to find the corresponding value of y. They lose track of what they're doing.
So for this lesson (as for almost all topics in calculus) we used algebraic and visual representations throughout, in parallel. I find it really helps students understand and keep track of what they're doing, and check whether their results seem reasonable.
I put a "do now" question on the board - "What is the equation of the tangent to the function...?" - and gave them a simple cubic function. Five minutes to work, in pairs, and everyone had found the gradient of the tangent, and many had also found its y-intercept. Some got a minus sign wrong, and could quickly see on the graph (which stretched close to, but did not include, the y-intercepts) that they must be mistaken. Go-through together and everyone's on track.
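For concreteness, here's a worked example with a hypothetical cubic (not necessarily the one I used in class; any simple cubic works the same way). Take \(f(x) = x^3 - 3x + 1\) and ask for the tangent at \(x = 2\):
\[
f'(x) = 3x^2 - 3, \qquad f'(2) = 9, \qquad f(2) = 2^3 - 3 \cdot 2 + 1 = 3,
\]
so the tangent has gradient 9 and passes through \((2, 3)\):
\[
y - 3 = 9(x - 2) \quad\Longrightarrow\quad y = 9x - 15.
\]
Note that the y-coordinate comes from \(f\), not from \(f'\) - exactly the place where students tend to slip.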
Follow-up: is there any other point on the graph of f that has the same gradient of tangent? Five minutes of pair work, and most students set up, and at least attempted to solve, the resulting quadratic equation. A few needed a hint, because they had tried to set the equation of the tangent equal to its own gradient... a sure sign they were having a hard time connecting the derivative function with the gradient of the tangent. This will be resolved once we do more work on using the derivative for graphing the function.
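With the same hypothetical cubic as above, the follow-up boils down to solving \(f'(x) = 9\):
\[
3x^2 - 3 = 9 \quad\Longrightarrow\quad x^2 = 4 \quad\Longrightarrow\quad x = \pm 2,
\]
so \(x = -2\) is the other point whose tangent has gradient 9. Setting the derivative, rather than the tangent line itself, equal to the gradient is the whole trick.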
Go-through together, and we're fine.
My only act of "telling" during this lesson was when I introduced the concept of a normal as a line perpendicular to the tangent at a certain point on the graph. A few students recalled that perpendicular lines have gradients that multiply to -1, and we were ready to go. There was a bit of the "do we use the same point? where do I plug this in?" going on, but when I brought their attention back to the graph on the board, they answered their own questions easily.
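Continuing the hypothetical example: at \((2, 3)\) the tangent gradient is 9, so the normal gradient is the negative reciprocal, \(-\tfrac{1}{9}\), and
\[
y - 3 = -\tfrac{1}{9}(x - 2) \quad\Longrightarrow\quad y = -\tfrac{1}{9}x + \tfrac{29}{9}.
\]
Same point, different gradient - which answers the "do we use the same point?" question.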
So all in all - students helped each other learn about tangents and normals, they worked efficiently during the whole lesson, and seemed to understand and enjoy the topic.
Tuesday, April 17, 2012
Inferential statistics - main ideas
In my opinion, inferential statistics is one of the most important, and most difficult to teach, of all the topics in high school mathematics and psychology courses. I get to teach it twice: in math, where the focus (unfortunately) is on manually carrying out the chi-square calculations, and in psychology, where the emphasis is on understanding the need for the test and interpreting the results. Psych students even run their own experiment, where one of the things being assessed is their ability to set up, justify, carry out, interpret, and discuss an inferential statistical analysis of their data. It's a challenging task for most students.
So far, I've taught inferential statistics every year, and never felt really satisfied with the outcome. Yes, my students can copy my example to obtain a test-statistic and compare it to the critical value in the book. Yes, they can even say "thus we reject the null hypothesis." But rarely do they demonstrate true understanding. This year's attempt to teach inferential stats failed, as usual. Students complained so much about their lack of understanding (I love when they do that) that I decided to give it another, serious, try.
So for today, I thought hard about the main difficulty in understanding inferential statistics. I think it is this: grasping that random variation alone can create differences between groups - differences that are due to nothing but chance. So I started with an object students know behaves randomly, a coin, and focused the lesson on the concept of random variation.
The setup: a normal coin, which I flip 15 times. Record the number of heads and tails in a contingency table.
Then I "bless" it. I make a show of it, concentrating hard and blowing on the coin carefully in cupped hands.
Next I flip the coin another 30 times.
It turned out that before the blessing, the coin came up 5 heads and 10 tails. After the blessing it came up 13 heads and 17 tails. Oh my - my blessing made the coin come up heads more than twice as often! Students immediately complained that I should take into account the different number of flips in each condition - thank you, students. We calculated percentages: 33% heads without the blessing vs 43% heads with it.
Key question: did my blessing work? Students were laughing at this, and suggesting wonderful things, like that the difference might be too small, and the sample too small, to be sure the results weren't merely due to chance. So how big should the difference be, for this sample, and how sure is "sure"? This led us into significance levels, and the need for statistical tests. We ran a chi-square online (vassarstats is great for this), and when we saw that the p-value was 0.75 we concluded that the difference in heads was most likely due to chance. We experimented with changing the data a bit: say, what if there were 29 heads and 1 tail in the "blessed" condition? Students agreed that would be more convincing, and voilà - the p-value was less than 0.001.
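For anyone who wants to redo this without vassarstats, here is a minimal sketch in Python (assuming scipy is installed; scipy applies the Yates continuity correction to 2x2 tables by default, which is how the p-value comes out around 0.75):

```python
from scipy.stats import chi2_contingency

# Contingency table from the lesson:
# rows = before / after the "blessing", columns = heads / tails
observed = [[5, 10],   # before: 5 heads, 10 tails (15 flips)
            [13, 17]]  # after: 13 heads, 17 tails (30 flips)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p is about 0.75: most likely just chance

# The what-if scenario: 29 heads and 1 tail in the "blessed" condition
chi2, p, dof, expected = chi2_contingency([[5, 10], [29, 1]])
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")  # p < 0.001: hard to explain by chance alone
```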
That's nominal data. I also wanted students to experiment themselves, and to obtain ordinal data to use with a Mann-Whitney U-test. So I asked: "Are you telepathic?"
Students paired up. One student in each pair thought (but didn't speak) of a word, either "BIG" or "small". The other person then said a number, any number. The numbers were tallied in two columns according to the two words. At the end, I picked the data of one pair of students and calculated the medians. Oh my - the median for BIG numbers was 87.5, compared to just 15 for the small numbers. Students thought about this: could they be sure their classmates were telepathic? We ran a Mann-Whitney U-test online (thanks again, vassarstats) and found a p-value of 0.009. Students were impressed. We concluded that we can be at least 95%* (or even 99%) sure that this pair of students were telepathic, except...
What if it wasn't random variation causing the difference in results? What if the difference came from a confounding variable within the experiment? Students suggested that maybe the girl thinking of the word somehow consciously or unconsciously signaled which word she was thinking of. Someone said humans have a hard time being truly unpredictable and random. So we arrived at the conclusion that more evidence is needed, and that statistical tests can only (at best) rule out random variation as the cause of a difference - other threats to validity can still be present.
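The U-test is just as easy to script. A sketch in the same spirit, with made-up numbers standing in for the pair's actual data (which I didn't keep), so the p-value below won't match our 0.009:

```python
from scipy.stats import mannwhitneyu

# Hypothetical numbers said aloud while the partner thought "BIG" vs "small";
# invented for illustration, not the pair's real data
big_numbers = [95, 80, 120, 87, 60, 88, 300]
small_numbers = [12, 3, 40, 15, 7, 22, 18]

u, p = mannwhitneyu(big_numbers, small_numbers, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")  # a small p: unlikely to be random variation alone
```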
Overall, I am very pleased with this lesson. I am happy I chose a coin, and even happier I chose ESP - something many students are naturally curious about and have already thought about in somewhat relevant ways. Many students told me later that they finally got the idea, that it made sense, that it was obvious that descriptive stats are insufficient for drawing conclusions about data. They could even transfer their understanding to psychology, to explain how participants in an experiment might be randomly different from each other, or even from themselves at an earlier point in time. I am particularly happy with the telepathy experiment. At first I thought I should have made students flip a coin to decide which word to think about, to make it truly unpredictable, but because their choice of words wasn't perfectly random, we had that very good discussion about internal validity and confounding variables, which I think deepened students' understanding of the power and limitations of inferential statistical tests.
Some changes I'll make for next time: provide each pair of students with a computer so they can run the test themselves. Spend more time working with hypotheses and writing up the results of the inferential test. I want them to be able to say "therefore the difference between conditions is significant and we should reject the null hypothesis", so we need to spend more time saying, and thinking about, this statement and what it means.
*Yes, I know that this is an incorrect interpretation of the significance level. I know, and it hurts me to teach it this way. But seriously, I think I must, at least to begin with, because students are simply not ready/able/given enough time to fully understand the concept of significance according to the frequentist approach to statistics. I comfort myself with the thought that hey, priorities gotta be made, and that perhaps, if looked at from a Bayesian perspective, what I'm teaching my students actually makes sense. It's a hard decision, though.
Monday, April 2, 2012
Perplexing!
I've recently had the opportunity to peruse a substantial number of research articles about international differences in mathematics knowledge, as measured by TIMSS and PISA. I found some very interesting things in there, such as that the amount of time spent on homework correlates negatively with mathematics achievement, both within and between nations, while the frequency of homework, and the effort put into it, correlate positively with mathematics knowledge. That's all good and great, and I'm already changing how I talk to students about homework, but other results from these studies are just bewildering:
- If a student likes math, and believes in her own ability to do math, that's gotta mean the student is more likely to develop good understanding of math, right? Well, not really. Within nations, this relationship holds, and in some nations (such as Finland) the correlation is positive and quite high. But between nations, the relationship is actually the opposite: students in high-performing nations report that they like math less, and consider themselves less good at it, than students in low-performing nations (Shen & Tam, 2008).
- A student who is persistent in finishing tasks is likely to learn more math. That, by itself, is not weird. But Boe, May, and Boruch (2002) found that task persistence (as measured by the number of background questionnaire items answered by students in the TIMSS 1999 study) correlates highly with mathematics achievement between nations, less so between classrooms, and very little between students. So nations in which students answered many of the background questionnaire items, which require no knowledge of mathematics or science, did better than nations in which students answered only a few of the questions. The correlation was around 0.75. At the student level, when comparing students within classrooms, the correlation was much lower. Overall, this "task persistence" variable seems to account for about 1/3 of the overall variation in results among students worldwide, and about half of the variation between nations. This is BIG. To my knowledge, no other variable has been found that explains so much of the variation. But what does it mean? Does it reflect cultural values of conscientiousness and long-term orientation (which would explain why East Asian nations do so well)? Or is it that students who expect to do well on the TIMSS are more motivated to fill in the questionnaire? And WHY is the relationship strong at the nation level but not the student level? And why, given the stunning results, has this study been cited only a handful of times since it was published 10 years ago?
- One reason Swedish researchers are interested in the TIMSS background data is that Sweden has seen a dramatic drop, from acceptable to outright poor results (still better than the US, though), from TIMSS 1995 to TIMSS 2007. "Why is this happening?" we're asking. Well. Hidden among the data are little-known figures such as these: Swedish 4th-graders receive almost 30% less mathematics teaching per year than the OECD average, and 8th-graders receive 20% less. Meanwhile, the Swedish media and government have aggressively blamed teachers for the poor results. To be fair, there is not a strong correlation between the amount of teaching hours and mathematics knowledge. Finland, for example, ranks very high but has the fewest teaching hours of all the participating nations. The US, on the other hand, has plenty of teaching hours, yet ranks very low. Yet it's difficult to ignore that the East Asian nations and Russia, which always top the rankings, not only provide students with a LOT more teaching (South Korea, for example, gives students 220 teaching days each year, compared with 178 in Sweden), but also have many students attending after-school mathematics tutoring.
That's it for now.
Boe, E. E., May, H., & Boruch, R. F. (2002). Student task persistence in the Third International Mathematics and Science Study: A major source of achievement differences at the national, classroom, and student levels (Research Rep. No. 2002-TIMSS1). Philadelphia, PA: University of Pennsylvania, Graduate School of Education, Center for Research and Evaluation in Social Policy.

Shen, C., & Tam, H. P. (2008). The paradoxical relationship between student achievement and self-perception: A cross-national analysis based on three waves of TIMSS data. Educational Research and Evaluation, 14, 87–100.