Trying to find math inside everything else

Archive for the ‘assessment’ Category

Rubrics for Standards

So my grading experiment has been going on for a month now, and so far I think it’s going well. But I was pretty stressed about getting it up and running, because a lot of the work was front-loaded. The thing I was particularly working to get done was my mega-rubric. I wanted to make a rubric that showed what exactly students needed to prove they understand to move up a level in a particular learning goal.

So here’s what I made (I call it the SPELS Book to go along with the students’ SPELS sheet):

I started by making the Proficient categories, and for the first 8 (the Habits of Mind/Standards of Practice) it was pretty easy to scale them down to Novice, and then to add an additional high-level habit for the Master level.

I was stuck, though, on the more Skill-Based Standards. I had all the things I wanted the students to show in each category, but how do I denote if they “sometimes” show me they can graph a linear equation? If I was doing quizzes all the time, like in the past, I could say something like “70% correct shows Apprentice levels.” But I wasn’t, and it seemed like a nightmare to keep track of across varying assignments.

So instead, my co-teacher had the idea that, if each topic had 4 sub-skills that I wanted them to know, we could rank them from easiest to hardest and just have that be the levels. So my system inadvertently became a binary SBG system, but still with the SBG and Level Up shell. Now if a student shows they understand a sub-skill, they level up. If they don’t, I write a comment on their assignment giving advice on what they should do in the future. What remains to be seen is how much they take me up on that advice. We’ll see.

Also, I’d LOVE any feedback you have on the rubric, and how I can improve it. Thanks!

Downloads

SPELS Book (pdf)

Updated Student Character Sheet (pdf)

Updated Student Character Sheet (pages)

Habits of Mind, Standards of Practice

For the past three years, I've loosely organized my classroom around the Mathematical Habits of Mind, which I first read about in grad school at Bard. I would give the students a survey to determine which habits are their strengths and which are their weaknesses, group them so each group has many strengths, and go from there. Last year I even used the habits as the names of some of my learning goals in my grading.

As I was planning for this year and the transition to the Common Core, I was thinking about how to assess and promote the Standards of Practice. And I realized that they are very similar to what I was already doing with the Habits of Mind. In fact, having a habit of mind would often lead to performing a certain practice! In that way, the SoP are actually the benchmarks by which I can determine if the habits of mind are being used.

Let me demonstrate:

Students should be pattern sniffers. This one is fairly straight-forward. SoP7 demands that students look for and make use of structure. What else is structure but patterns? Those patterns are the very fabric of what we explore when we do math, and discovering them is what leads to even greater conclusions.

Students should be experimenters. The article mentions that students should try large or small numbers, vary parameters, record results, etc. But now think about SoP1 – Make Sense of Problems and Persevere in Solving Them. How else do you do that except by experimenting? Especially if we are talking about a real problem and not just an exercise, mathematicians make things concrete and try things out so they can find patterns and make conjectures. Only after they have done that can they move forward with solving a problem. And if they are stuck…they try something else! Experimenting is the best way to persevere.

Students should be describers. There are many ways mathematicians describe what they do, but one of the most important is to Attend to Precision (as evidenced in things like the Peanut Butter & Jelly activity, depending on how you do it). Students should practice saying what they mean in a way that is understandable to everyone listening. Precision is important for a good describer so that everyone listening or reading thinks the same thing. How else can you properly share your mathematical thinking?

Students should be tinkerers. Okay, this one is my weakest connection, mostly because I did the other 7 first and these two were left. But maybe that's mostly because I don't think SoP5 is all that great. Being a tinkerer, however, is at the heart of mathematics itself. It is the question "What happens when I do this?" Using Tools Strategically is related in that it helps us leverage that situation, helping us find the answer so that we can move on to experimenting and conjecturing.

Students should be inventors. When we tinker and experiment, we discover interesting facts. But those facts remain nothing but interesting until the inventor comes up with a way to use them. Once a student notices a pattern about, say, what happens whenever they multiply out two terms with the same base but different exponents, they can create a better, faster way of doing it. This is exactly what SoP8 asks.

Students should be visualizers. The article takes care to distinguish between visualizing things that are inherently visual (such as picturing your house) and visualizing a process by creating a visual analog, in order to process ideas and clarify their meaning. This process is central to Modeling with Mathematics (SoP4). It is very difficult to model a process algebraically if you cannot see what is going on as variables change. To model, one must first visualize.

Students should be conjecturers. Students need to make conjectures not just from data but from a deeper understanding of the processes involved. SoP3 asks students to construct viable arguments (conjectures) and critique the reasoning of others. Notably, the habit of mind asks that students be able to critique their own reasoning, in order to push it further.

Students should be guessers. Of course, when we talk about guessing as math teachers, we really mean estimating. The difference between the two is a level of reasonableness. We always want to ask "What is too high? What is too low? Take a guess in between." Those guesses give us a great starting point for a problem. But how do you know what is too high? By Reasoning Abstractly and Quantitatively, SoP2. Building that number sense of a reasonable range strengthens our mathematical ability. We need to consider what units are involved and know what the numbers actually mean to do this.

What we do, or practice, as mathematicians is important, but what’s more important is how we go about things, and why. A common problem found in the math class is students not knowing where to begin. But if a student can develop these habits of mind, through practice, that should never be a problem.

Level Up! +1 to Exponents, +2 to Equations

Previously on The Roots of the Equation: You All Have “A”s, You All Have “0”s, and Grade Out of 10? This One Goes to 11.

I like games. All kinds of games: video, board, tabletop, role playing. And so I often think about how games and teaching align. One thing (good) games really do well is provide a sense of progress (especially role-playing games). You start off with not many skills, but as you advance you build them up, learn new things, and can conquer tougher tasks. By the time you reach the end of the game, those things that were hard from the beginning ain’t nothing to you now.

Games don’t usually score you on every little thing that you do. What they do is take a more holistic view and then, at some point, say that you’ve done enough to go up a level. And I say, why can’t I grade that way?

Many people have lamented that the best grading system would have no grades at all, just feedback that students respond to in order to improve their learning. But grades are required by external forces: school districts, colleges, parents, principals. Maybe, though, there's a way around that.

Last time, I said grades should just be a sum of the levels of the learning goals. So now I’m picturing students having a “character sheet” that looks something like this.

I maybe have created that name just so I could tell students to take out their SPELS sheet.


Student Character Sheet 2

The N/A/J/P/M are my current grading system, Novice –> Apprentice –> Journeyman –> Proficient –> Master

At the beginning of the year we can do a pre-assessment to determine their "starting stats and skills." Then as the year moves on, we do our work in class. But none of that work is graded in the usual sense. We would write feedback on the assignment, giving areas for improvement, but the only time a grade is mentioned is when a standard improves. Even then, we don't focus on what the grades are ("You now have a 3 in Exponent Rules"), but rather on how they've grown ("You gained one level in Exponent Rules!"). The former just highlights that they are not the best they could be. The latter highlights their constant growth and improvement.

(Then, at the end, based on what I said in the last post, their grade is literally how many boxes are shaded on the sheet. Have 75 boxes shaded? That’s a 75.)

In order to do this effectively, what we really need to have are rubrics for each standard. That way we know what counts as evidence of a certain level in a standard across all assignments, so it doesn’t matter which assignment provides the evidence. The upside to this is that you do not need to then have a rubric for each assignment! You only need your standards rubrics, because that is all you are using. (The collection of these rubrics, then, in the hands of the students, are a road map to success.)

I’m pretty excited by this idea, and can’t wait to try it next year. This is my idea from the last two posts taken to the next level, with a clear focus on growth, and not deficit. We can’t get rid of grading, and I’m not 100% convinced that we should. But we can definitely minimize the damage that it does and use it to actually promote students’ learning. All we need to do is focus on how we always get better.

Grade Them Out of 10? This One Goes to 11

Previously on The Roots of the Equation: You All Have “A”s, followed by You All Have “0”s.

I talked about how I currently grade (or, more specifically, how I tabulate grades) in my last post, but I don't want to give the impression that I'm totally satisfied with the system. It has a great core idea, but it's missing something.

When I first started student teaching, my mentor teacher's school had just adopted a grading system called EASE (Equity and Access in Student Evaluation), essentially introducing me to SBG from the get-go, before I really knew what it was. Because the whole school used EASE (which had a 3-point scale: not yet proficient, proficient, and highly proficient), the report card could just display the list of standards and the proficiency level. But when it came time to send transcripts to colleges, they still needed to have final grades. So those were calculated based on the percentage of standards with a P or a HP.

However, you did not need to be highly proficient at every single standard in order to get a 100. That goal was achieved by earning HP for half the standards and P for the other half. But my current system (and possibly many SBG systems? Let me know) requires mastery of all learning goals for that A+. And that's really hard to do! Why do we expect a student to be perfect at everything? No one is.

One way to deal with this is to weight mastery (5 on a 5-point scale) as worth more than it is. But that seems like a sloppy way of doing it. There must be something more elegant. And then I had the following thought:

Why average the standards, and then scale up to 100? Why not just add up the score? And then, if the problem is requiring all 5s to get to 100, why not just have more than 20 standards?

This requires thoughtful choices, but I think it has a lot of potential. Let's walk through an example. Say I grade on a 5-point scale. If I have 20 standards, a 5 on each gets me a grade of 100. But what if I have 22 standards (say, 8 standards of practice and 14 content)? Then someone who gets a 4 on every standard gets an 88, a B+. If they then turn half of those into 5s, that's a 99, A+. Someone who has a 3 on everything, so some fatal flaw in all of their knowledge, but decent understanding, gets a 66, a D. This seems reasonable to me.
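To sanity-check the arithmetic, here's a minimal sketch of the "sum the standards" idea in Python. The cap at 100 is my own assumption (the post doesn't say what happens if the raw sum exceeds 100):

```python
# Sketch of "grade = sum of standard levels" on a 5-point scale.
# Assumption: the final grade is capped at 100, since 22 standards
# at all 5s would otherwise give 110.

def final_grade(levels, cap=100):
    """levels: one score per standard (e.g. 1-5). Grade is the capped sum."""
    return min(sum(levels), cap)

# 22 standards, all 4s -> 88 (B+)
print(final_grade([4] * 22))              # 88
# Turn half of them (11) into 5s -> 99 (A+)
print(final_grade([5] * 11 + [4] * 11))   # 99
# A 3 on everything -> 66 (D)
print(final_grade([3] * 22))              # 66
```

With 20 standards the cap never matters (20 × 5 = 100 exactly); it only kicks in once you add the extra standards that create the slack.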

If you grade on a 4-point scale, you could have 28 standards; if your 4-point scale is 0-3 instead of 1-4, you could even have 40! The choice is yours exactly how you break it down. But I think the idea has potential. Am I totally off?

(To be clear, I’m not letting my grading system determine what standards I teach. I already break down complex standards and combine simple ones, until I find ones that fit my class best. Now I’m just having a target number of standards for that process.)

You All Have “0”s

Last time, on The Roots of the Equation: You All Have “A”s.

To follow-up on my last post about grading, I wanted to talk about what I do in my class. What I do is applicable to all classrooms, whether they use SBG or not.

As I said last time, the promise of SBG is to promote a growth mindset with regards to grading: instead of being penalized by mistakes, you earn for proving you understand the standards and your grade rises. However, the responses I received belied that idea. When I asked what you would tell a student who asked their grade mid-marking period, most referred to something like a “snapshot” of their grade, simply averaging whatever they’ve done so far (whether it is standards in SBG, or test and projects and HW in more traditional grading).

If a student gets that snapshot every day, then it is quite clearly going to fluctuate and lead to some distress. Since my school uses an online gradebook, students can, in fact, check it. But I wanted my promise of rising grades to hold. So, I had to make it actually happen.

On the first day of class, I tell all my students they currently have a 0. Instead of starting at 100 and dropping, every single thing they do in my class that is assessed will improve their grade. Even if they do terribly on an assignment, say, getting a 50, that still improves their grade, because 50 is higher than 0.

The actual implementation of this, however, is hard. It means that, at the start of every marking period, I need to think ahead about what things I'm going to be assessing for the whole 6 weeks, and then enter those into the gradebook with a grade of 0. That way, everything will start at 0 and go up when actually completed. (Students can still see how they've done on things completed so far, and can determine their own "snapshot average" if they like, but this gives the view of the whole marking period.)
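Mechanically, the "start at 0" gradebook is just an average over every planned assignment, with ungraded work counted as 0. A small sketch (the assignment names are made up):

```python
# Sketch of the zero-start gradebook: every planned assignment is entered
# up front at 0, so the running average can only rise as real scores
# replace the zeros.

def running_grade(planned, scores):
    """planned: assignment names for the whole marking period.
    scores: dict of name -> earned score (0-100) for work graded so far.
    Ungraded work counts as 0, so each new score only raises the average."""
    total = sum(scores.get(name, 0) for name in planned)
    return total / len(planned)

planned = ["quiz1", "project1", "quiz2", "essay", "project2", "final"]
print(running_grade(planned, {}))             # 0.0 on day one
print(running_grade(planned, {"quiz1": 50}))  # even a 50 raises the grade
print(running_grade(planned, {"quiz1": 50, "project1": 90}))
```

By the end of the marking period, once every planned score is in, this average matches what a top-down gradebook would have computed all along.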

On the left, averages and assignments we have already completed. On the right, U grades mean “Unrated,” usually for assignments we have not done yet. The student who got an A- last marking period currently leads the pack with a 60.

But…thinking ahead 6 weeks about what I’m assessing…shouldn’t we be doing this anyway? Isn’t that just unit planning? My current Algebra course has 7 units, so it does work out to be almost one unit per marking period. And the process isn’t that inflexible: if I delete an assignment because I decided not to do it, or add something in, that’s a small fluctuation compared to the overall experience.

By the end of the marking period (as you see in my picture), everything will match up to the number it would have been had I gone top-down. But the way we get there is important. It is always better to grow.

ADDENDUM

After being questioned by Andrew Stadel and Chris Robinson on Twitter, I have some more explanations.

Andrew Stadel: I’d like to know more about this. Admin & parent understanding? Student response? Pros, cons, etc.

Me: Parents felt it was unclear at first, until I input marks that differentiated between “not done or graded yet” and “missing.” Then they were more on board. Students were confused by it at first, but liked it in the end. Admin supports it.

Pros include feeling like we are always improving and, a big one, it makes grading so much more enjoyable for me, because no one goes down.

Cons are that it’s hard to gauge sometimes (in terms of “snapshots”), especially when you get a big rush of grades at the end of the marking period.

Chris Robinson: James, can your “grades” go down per individual standard/learning target through the term?

Me: I’ve seen it go both ways in SBG. For me, they can’t go down in content standards, but can in practice ones. I do continuously assess but I feel like once someone has shown some understanding, they keep it, and they just need a refresher. (But I think I got that from Dan Meyer’s original “How Math Must Assess” post.)

Stadel: Thanks for explaining. What percent of students adjusted to & welcomed it? I like the premise of zero understanding and working towards mastery.

Me: Adjusted to, I would say over 95%. Welcomed, in the 80%. (Super rough estimates.)

Stadel: Do you have any materials/handouts explaining the philosophy to parents & students?

Me: I…really should.

You All Have “A”s

So I was thinking about grading a little bit, and how grading works in my classroom. I tried to ask people about grading on Twitter, but perhaps the medium is not the best for talking about it, because only one person responded. (Thanks, @algebraniac.) I wanted to get a feel for how people out there calculated grades, before I wrote about it, but I figure, what the hell! Just write about it anyway! (Maybe channeling Hedge a little bit here.)

So, like, I'm imagining a typical first day of class. The teacher tells all the students, "As of right now, you all have 'A's." With the intention being, of course, encouragement, because despite how badly they might have done in that subject in the past, right now, they have an A.

But when you think about it a little more…it’s really kind of terrible, isn’t it? “Right now, you have an ‘A’…and the only way to go is down.” So then the grades don’t reward good work, they only penalize bad. Your grade tracks every mistake you make, every little fuck-up, dropping in a downward spiral. And we talk about students “slipping” and “dropping the ball” and “not doing as well as they used to.” The whole terminology is pretty terrible.

On the surface, it might seem like Standards-Based Grading can help with this, like it helps with so many other things. Students have standards, and if they are low they reassess and go up. At the end of the marking period or term, that certainly seems like a good system. For each individual standard, it works, but as a collective whole? Let me ask you this:

It is halfway through the (quarter/marking period/term), so report card grades are not due for another few weeks. A student comes up to you and asks what their grade is. What do you tell them? What is it calculated from? And how will the future work they do affect that grade, if they do well? What about if they do poorly?

I’d really like to know. Drop a line in the comments and tell me. I’ll follow up with people’s responses and what I do in another post.

 

Math Practical

I was proctoring the Earth Science Regents exam today, and after the students finished I had to direct them to go take the Practical part of the exam in another room. And it got me thinking: why is Earth Science the only exam with a practical? Certainly at least the other sciences should.

Then I thought, well, the foreign language exams do have practicals: they have both a listening section and an oral section, as well as reading and writing. That's everything. Same for the English exam. And, in a way, the social studies exams do too, in that the DBQs could be considered practicals, since historians work by analyzing various documents. (Though a research aspect would be more practical.)

So then that got me thinking about having a Math Practical as an exam. It’s totally doable, and I think it would be an interesting idea. How would it work? Here’s an example:

Student goes into a classroom. The proctor hands over supplies: a measuring tape, a clinometer, a Home Depot circular, and a calculator. The exam question is simple: how much would it cost to paint this room? And included must be a margin of error on their calculation. So they need to measure length and width properly, use a clinometer and trigonometry to get the height of the room, calculate surface area, calculate and subtract non-painted areas, turn that surface area into gallons of paint, and then that into a cost. They may even need to calculate exposed surface area of things like cylindrical pipes, too. It’s all math content, but something that is actually done.
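For what it's worth, the core of that practical could be sketched in a few lines. The measurements, paint-coverage rate, and price here are all hypothetical stand-ins for what a student would actually gather with the tape, clinometer, and circular:

```python
import math

# Sketch of the paint-practical computation. All numbers below are
# hypothetical; a real answer comes from the student's own measurements
# and the prices in the Home Depot circular.

def room_height(dist_to_wall, angle_deg, eye_height):
    """Clinometer trig: height = eye height + distance * tan(angle up to ceiling)."""
    return eye_height + dist_to_wall * math.tan(math.radians(angle_deg))

def paint_cost(length, width, height, unpainted_area,
               sq_ft_per_gallon=350.0, price_per_gallon=30.0):
    """Wall area minus doors/windows, rounded up to whole gallons, then dollars."""
    wall_area = 2 * (length + width) * height - unpainted_area
    gallons = math.ceil(wall_area / sq_ft_per_gallon)
    return gallons * price_per_gallon

h = room_height(dist_to_wall=12, angle_deg=30, eye_height=5)  # ~11.9 ft
print(round(h, 1))
print(paint_cost(length=30, width=25, height=h, unpainted_area=60))
```

The margin-of-error requirement would then come from propagating the measurement uncertainty (e.g. recomputing the cost with each length a half-foot high and low), which is exactly the kind of reasoning the practical is after.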

Maybe I’ll implement it next year as an exam. If only I didn’t teach trig and surface area right before the Regents….

My Final Exam

My second post was about the game Facts in Five and how I thought its scoring system would be helpful for my assessments. I had also been having thoughts about how to measure synthesis while using SBG. So I thought a final exam specifically designed to measure synthesis would be the best approach. Here's how I went about it. (This was the final for the Fall semester, since in the Spring they have the Regents.)

In each bin, I put a slip of paper containing a question. Students will go to the bins and choose which questions they would like to answer, and compile them into a coherent exam.

Those aren't fractions on each bin label, though. They denote which Learning Goals each question assesses. Instead of having each Learning Goal have its own questions, they mix. But each goal still has 4 questions that apply to it, like so:

Not every topic can be combined with others, but now the student can choose which goals to work on: either they can try to improve a Learning Goal that they got a lower grade on, or pick ones they did well on and show they can perform Synthesis, which is above mastery. But, of course, all of these questions are harder than what they’ve done before.

To score the exams, I use the same scoring system as in Facts in Five, with students squaring what they get right in each Learning Goal. So they will get more points by focusing on completing a goal, instead of jumping around. An example:

Here this student got a decent score by focusing on completing four of the learning goals (9, 11, 16, and 18), and receiving assorted other points.
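A minimal sketch of that scoring rule, assuming (as above) that each goal has 4 questions and the points for a goal are the square of the number answered correctly. The goal numbers echo the example; the data is invented:

```python
# Sketch of the final-exam scoring: points per Learning Goal are the
# square of the number of questions answered correctly in that goal.

def exam_score(correct_by_goal):
    """correct_by_goal: dict of learning goal -> questions right (0-4)."""
    return sum(n ** 2 for n in correct_by_goal.values())

# Completing a goal (4 right) is worth 16; scattering the same 4 answers
# across 4 different goals is worth only 4.
print(exam_score({9: 4}))                        # 16
print(exam_score({9: 1, 11: 1, 16: 1, 18: 1}))   # 4
# Four completed goals plus some stray answers elsewhere:
print(exam_score({9: 4, 11: 4, 16: 4, 18: 4, 5: 2, 7: 1}))  # 69
```

The squaring is what pushes students toward depth: finishing a goal quadruples its value compared to getting one question right in each of four goals.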

I definitely like the idea here, but I do need to refine the delivery. It was hectic. But I did not want to print out all 44 questions for everyone, when not everyone will do all of them. That would be a lot of paper. Suggestions are welcome.

Design for Tracking Progress

When the ISSN came and saw my school, they had a document to show our growth in several different categories that I thought looked cool. It consisted of a circle with various rings to show your level in a category, split into wedges for each category. The person presenting it, however, pointed out that it was fairly flawed, because the outside ring looked so much bigger than the inside ones, so even when you reached 3/4 it looked fairly empty.

Of course, the problem was purely mathematical. Whoever had designed the chart had split the radius of each wedge into 4 parts equally, so that the first ring started 1/4 of the way down the radius, the second ring was 1/2 of the way, etc. Clearly that will make the areas very different. So I quickly made up a version where the areas were proportional, which isn’t too hard in graphing software, since the formula for a circle uses r^2 anyway.
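The fix is a one-liner: for the four rings to have equal areas, the k-th boundary must sit at r·√(k/4) rather than r·(k/4), since area grows with the square of the radius. A quick sketch:

```python
import math

# Equal-area rings: place the k-th boundary at outer_radius * sqrt(k/levels)
# instead of outer_radius * k/levels, so every ring encloses the same area.

def ring_radii(outer_radius, levels=4):
    """Radii of the ring boundaries, innermost to outermost."""
    return [outer_radius * math.sqrt(k / levels) for k in range(1, levels + 1)]

radii = ring_radii(1.0)
areas, prev = [], 0.0
for r in radii:
    areas.append(math.pi * (r ** 2 - prev ** 2))
    prev = r
print([round(r, 3) for r in radii])  # boundaries at sqrt(1/4), sqrt(2/4), ...
print([round(a, 3) for a in areas])  # all four areas equal (pi/4 each)
```

The same square-root trick generalizes to the weighted version: to give ring k a target share w_k of the wedge's area, put its outer boundary at r·√(cumulative share up through k).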

Afterwards, I decided to make one for my own class. I also decided that, because each mark (Novice, Apprentice, Proficient, and Master) is not weighted the same (for example, Novice is a 50, worth a whole lot more than any other mark from the start), I would have the areas of the rings reflect those weights. Here's what I made:

Learning Goal Checklist – Spring Semester

Facts in Five

On my last visit to my parents, I brought home a game we used to play when we were younger that I loved: Facts in Five. I sat down today to take a look at it and the rules, and reminded myself how to play. (All I recalled was that it was like Scattergories, but better.)

For those who don't know, a quick overview: in Facts in Five, players draw cards to pick 5 categories and 5 letter tiles. The categories and letters are set up in a grid so that, once the timer starts (5 minutes), you have to fill in answers that match the letter (rows) and the category (columns). So that's five answers per category, five answers per letter, 25 answers total.

What really struck me was the scoring system. Instead of just tallying the number of answers, the grid itself contributes. If you have one answer in a column, you only get a point, but if you have two, you get four, 3 gets you 9, etc. Same works for rows. That way it is much more valuable to fill out one column completely (30 points) than to have one right in each column and row (10 points), even though it’s the same number of answers. Having a deep and complete knowledge of a category is more worthwhile than a weak knowledge of several topics.
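As I read the rules described above, the grid scoring works out like this (a sketch, with the grid represented as a set of filled cells):

```python
# Sketch of the Facts in Five grid scoring: on the 5x5 grid, you earn the
# square of the filled count in each row PLUS the square of the filled
# count in each column.

def grid_score(filled):
    """filled: set of (row, col) cells holding a correct answer, indices 0-4."""
    score = 0
    for i in range(5):
        score += sum(1 for (r, c) in filled if r == i) ** 2  # row i
        score += sum(1 for (r, c) in filled if c == i) ** 2  # column i
    return score

# One column filled completely: 25 for the column + 1 for each of 5 rows = 30.
print(grid_score({(r, 0) for r in range(5)}))  # 30
# The same 5 answers scattered, one per row and column: 5 + 5 = 10.
print(grid_score({(i, i) for i in range(5)}))  # 10
```

Same five answers, triple the score when they land in one column: that's the depth-over-breadth incentive in a nutshell.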

This is the same thing I tell my students when I give them tests. My tests are usually split into sections based on the learning goals/standards they need to show mastery of. It is better to ace one standard and be done with it than to muddle through, especially in the limited time of a period, so I tell them to focus on the topics they know.

What I wonder is how I can incorporate Facts in Five into my assessment system, since its scoring system inherently supports my approach and is less holistic than what I currently have.