Have you ever taken time to complete one of those surveys at the end of a customer service call?
If you did, it was probably under one of two circumstances:
- You had an awesome experience and you wanted to shower that person with praise.
- Or you had such a horrible experience you wanted to give them a piece of your mind.
While it can feel nice (or at least cathartic) to give people that feedback, is this kind of surveying even effective if the results tend to be polarized? It’s hard to tell.
Emotion recognition technologies aren’t far off from becoming part of events, either. And they’ll help companies like yours improve their events substantially.
The Problem With Event Feedback
Especially in the context of sales meetings and other internal events, participants aren’t going to provide perfectly balanced feedback. There’s a host of other factors in play that muddy everything up.
Perhaps these thoughts have crossed your mind when filling out a feedback survey after an event:
- “What if this isn’t really anonymous? If my boss finds out I gave her a low score, she’ll have my head.”
- “This session topic covers my department and may create more funding for us—I have to give it five stars.”
- “Poor guy. That was a miserable speech, but he looked like he was trying so hard. Three stars for effort.”
- “I wonder how my co-workers rated this session. I don’t want to lowball.”
- “What happens if I give all low scores and we don’t get a conference at all next year?”
Collectively, this sort of internal conflict creates response biases that yield unreliable data.
And that bias will exist unless you remove subjectivity from the picture, which is what we did at our recent sales conference.
How We Used Emotion Recognition Tech at One Event
The ITA Group Fall Conference is a yearly tradition. It’s a week-long sales conference event where our sales team members from around the country get together to learn and collaborate—capped by an Annual Meeting for all ITA Group team members.
This past year, on top of our typical pre- and post-event surveys, we had the opportunity to pilot a new wearable sensor technology capable of recognizing and recording emotional responses.
Throughout the day and during all sessions, the wristband measured biosignals. Traditional user feedback, using a one- to five-star scale, was also gathered using a mobile app at the end of each session.
At the end of the day, the users’ data was analyzed, mapped and compared against the control group of traditional surveys. What we discovered was fascinating.
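To give a sense of what that comparison looks like, here’s a minimal sketch in Python. The session names and numbers are entirely hypothetical, and the correlation function is a generic statistical tool, not the actual analysis pipeline used at the conference:

```python
from statistics import mean

# Hypothetical per-session data: average survey score (1-5 stars)
# and average wristband engagement (normalized to 0-1).
# All values below are illustrative, not real conference results.
sessions = {
    "Keynote":     {"survey": 4.8, "engagement": 0.82},
    "Tech Demo":   {"survey": 3.1, "engagement": 0.74},
    "After Lunch": {"survey": 4.2, "engagement": 0.31},
    "Deep Dive":   {"survey": 4.6, "engagement": 0.79},
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

surveys = [s["survey"] for s in sessions.values()]
engagement = [s["engagement"] for s in sessions.values()]

r = pearson(surveys, engagement)
print(f"Survey vs. engagement correlation: r = {r:.2f}")
```

A weak or negative correlation across sessions would suggest the two feedback channels are telling different stories, which is roughly what we saw.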
The Results of Using Emotion Recognition Software
After we shut the doors and dug into the results, we gathered and shared some great insights with conference organizers and speakers:
- On the whole, manual survey rankings seldom corresponded with the engagement measured through the wristband. In fact, many were opposites, with high survey scores and low emotion recognition feedback (and vice versa). This is further evidence that biases are in play with manual surveys.
- Speakers can rebound from errors. During one session, a technology demonstration did not go exactly as planned. Levels of confusion spiked, but engagement stayed high. By sticking to the message and continuing to deliver it even when things weren’t going as planned, the speaker remained successful.
- Engagement and survey scores varied among different roles. Audiences want to learn what will help them in their job, and they aren’t as interested in everything else. Understanding what the audience truly wants to hear and making sure it is delivered in an effective manner matters. And, similarly, the people picked to participate in the trial matter as well—choosing people from exclusively one role or demographic would have skewed the results.
- Individual results are not very actionable. Some people are naturally drawn to topic A over topic B, and that’s fine. The results of one individual should not be dissected; rather, the aggregated data as a whole should be used to create a clear direction for improvement.
After looking at all the data, here are the changes we’ll be making next year based on the two kinds of feedback:
- Invest in the right event components. While sessions that registered high on either survey feedback or biometric feedback are still important, the sessions that got high scores in both kinds of feedback are a must. For us, this was an outside keynote speaker, and the data we gathered made a strong case for another outside keynote speaker next year.
- Personalize and curate content. Know the roles that are attending your event and give them the content that they need. It’s better to specifically appeal to one role than try to appeal to everyone and dilute your message.
- Keep context in mind. The message, timing, delivery and context of each session, and how they align with the audience, should all be considered. For example, one session, scheduled right after lunch, featured the traditional static stage with speakers seated between two ferns. Emotion analysis revealed that the session received the second-lowest score. Next year, we’ll change our after-lunch sessions to go beyond simple conversations, incorporating movement, visuals, interactivity and a fast pace to keep people engaged past the post-lunch slump.
- Deliver mentally and visually stimulating content. Sessions with a one-two punch of deep-dive research and engaging visuals were a home run, so we’ll focus more on that next year.
The Future of Events
Will the traditional post-event survey be scrapped? Not anytime soon. For now, it’s a great complement to other kinds of data.
Expect technologies such as the one we experimented with to be an integral part of event planning in years to come, and look for more excitement in the event industry: artificial intelligence, machine learning, voice-activated response and location-based services.
ITA Group is always working to improve our offerings for our clients and help them prosper. And, as always, your feedback is appreciated, biometric or otherwise.