Approaching Evaluation

by Tobi Voigt

My go-to book on evaluation

If I have learned anything as a museum educator, it’s that those of us who didn’t come into the field through teaching have a lot to learn.  When I got my first museum education job, I spent months reading and learning about state learning standards, social studies instruction methods, curriculum requirements, and even waded a bit into state and federal legislation.  The language of public education alone has taken me years to master: authentic assessment, scaffolding, differentiated instruction, and so on.

This research has informed my belief that museum educators cannot create K-12 programs in a vacuum.  Let’s be honest: we have all put together what we thought was a GREAT workshop or tour, only to see it flop when teachers and schools don’t book it.

So, how do we create programs that are relevant and desired by teachers and schools?  Evaluation is key.

According to my new favorite book on evaluation, Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings, there are four types of evaluation that can help us develop great programs:

  • Front-end evaluation: This helps inform the program development process.  It can be done with surveys or interviews, but I have used it most successfully through focus groups.  I assembled a Teacher Advisory Board that meets twice a year.  At the meetings, I ask what their needs are, and we brainstorm ways that my institution can meet them.
  • Formative evaluation: This takes place while a program is being developed.  For my Building Detroit online game and resource, I put together a small team of pilot teachers.  I had them test out prototypes of the game, and review and give feedback on the curriculum units.  For field trip programs, I have invited classrooms to test them out for free, and made tweaks as necessary.
  • Remedial evaluation: This is what I consider “standard” evaluation.  Once a program is up and running, I do regular surveys to get ongoing feedback from teachers and students.  For our on-site guided tour, I created a simple SurveyMonkey survey that we send out to teachers via email within 3 days of their field trip.  Creating that survey took a lot of work.  There is a whole science behind asking the right questions to elicit helpful and relevant feedback.  I use the Practical Evaluation Guide book to help me craft my surveys.

    [Screenshot of my tour survey]

  • Summative evaluation: For one-time programs – including our annual special events – we have used paper surveys to learn about the audience, where they learned about the program, and what they liked or disliked.  It’s informal, but effective.  We gather the survey responses into “post-mortem” reports that we refer to when planning future programs and events.

Evaluation takes time and energy, but it is worth it.  As mission-driven organizations, museums must use some or all of these methods to ensure that we remain relevant to our community and our visitors.  What evaluation methods have you found effective?

One Response to “Approaching Evaluation”

  1. July 31, 2012 at 1:49 pm, Tobi V. said:

    Someone asked me what the response rate is for the field trip surveys – the one where we email the link to a web-based survey within three days of the field trip. I thought I’d share my answer with everyone:

    Currently, the response rate is about 20%. Considering our very passive way of requesting feedback (an email to the teacher with the link after the trip), I am comfortable with that percentage. I also find that about 50% of responders type qualitative feedback into the optional open-ended question boxes.

    To date, the survey responses have helped me identify the marketing weaknesses in our pre- and post-visit materials. We’ve changed the way we promote them to encourage more use. It’s working!