Speaker evaluations: they’re important, please turn them in!

17 April 2011

Please, oh please, take the time to fill out the evaluations – whether you loved the talk, hated it, or even left early. The data’s good to have, and the only other thing we can really go off of to understand a session is the tweets that come in during and after the talk.

MIX 11 was a great success in my mind; as always, it’s just a lot of fun to actually interact with everyone in the ecosystem, gather feedback, and sometimes even get praise for what we’re doing. We live in the Redmond echo chamber sometimes, and it’s good to get out and see the “real world” (even if it is Las Vegas).

The kind of data we get

Once all the evaluations are together, the stellar event management staff compiles the results and we get a big Excel sheet with the data. I didn’t think it would be appropriate to post data from others, so here’s a look at just my entry.

The speaker metric tops out at 4 (a 4 means the speaker rocks), and the relevant material metric goes up to 5.

The fields are:

  • Date and Time
  • Session Code
  • Speaker
  • Title
  • Overall Satisfaction with the speaker’s presentation (out of 4)
  • Usefulness of the information presented (out of 5)
  • People in attendance
  • Evaluations turned in
  • % of people who turned in evaluations

You can also assume that attendance numbers drop a little on the last day (I was on day 3 of 3).

My results were in the top tier overall: a speaker score of 3.89, a usefulness score of 5.00, attendance of 118, and 9 evaluations turned in (about 8%).
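If you’re curious how that 8% falls out of the raw columns, here’s a tiny sketch (plain Python with made-up variable names, not anything from the actual sheet) of the derived response-rate field:

```python
# Hypothetical recreation of the "% of people who turned in evaluations" column.
attendance = 118       # People in attendance (from the eval sheet)
evals_turned_in = 9    # Evaluations turned in

response_rate = evals_turned_in / attendance
print(f"Response rate: {response_rate:.1%}")  # 7.6%, reported as 8% above
```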

Oh, and the comments are always fun. In this case I’m pretty sure that one of my co-workers turned in the dreamy comment, but the others are useful. We get all of the verbatim feedback:

  • Best session of the conference.
  • AWESOME! This is exactly what I needed and I almost didn’t come because of the description.
  • I can’t believe I got to see THE Jeff Wilcox. He’s soooo dreamy!
  • Great session, I will use a lot of what I have learned.

How we use the feedback

It’s mostly interesting to check how speakers rank; you can assume that this plays into opportunities to give talks a few years in a row, and so on. From a presentation standpoint, the other important metric is how relevant the material was to those who gave feedback.

Unfortunately the low number of forms – often hovering around 10% – doesn’t provide a ton of data. A lot like Yelp, the responses tend to come from people who either loved or hated the session, so it would be very nice to have higher response rates to actually know how folks feel.

I’m sure the event folks also compare numbers between talk tracks and so on, but I usually just focus on my particular track. The Windows Phone track’s short code was DVC.

Reading my own feedback, I should not have received a 5.00. Statistically, somebody should have given a lower score; I’m not that awesome. So this is a sign that the 8% turn-in rate is a little low; it would be better to have more data.
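To put a very rough number on that intuition (the 80% below is completely made up, not from the eval data): even if most of the room would hand out 5s, the odds of a clean sweep across only 9 self-selected respondents are not tiny, which is why a perfect 5.00 says more about the sample size than about the talk.

```python
# Illustrative only: assume some share of attendees would rate usefulness a 5,
# then ask how likely it is that all 9 respondents happen to be in that group
# (ignoring the self-selection that makes a sweep even more likely in practice).
p_gives_five = 0.80   # assumed share, not measured
respondents = 9       # evaluations actually turned in

p_all_fives = p_gives_five ** respondents
print(f"P(all {respondents} give a 5) = {p_all_fives:.1%}")  # about 13%
```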

About paper vs electronic evals

I’ve heard that paper return rates are much higher than electronic ones, but wow, 8% just hurts. When I attended a few European conferences last year (GOTO – formerly JAOO – and also Øredev), they also had paper evaluations, but much simpler ones (you didn’t even need a pen or pencil to complete one). I loved the format.

When you exited the room after a talk, there were three bins of paper: red, yellow, and green. You just took one and placed it in the big box with the attendant, or, if you wanted, you could write comments on the page as well.

I’m going to guess that they easily get more than a 50% response rate in terms of the basic “was it an OK talk or not?” feedback. Would love to see that format at MIX.

Please fill out those evaluations next time!

Jeff Wilcox is a Software Engineer at Microsoft in the Open Source Programs Office (OSPO), helping Microsoft engineers use, contribute to, and release open source at scale.
