Why and how to do a post-launch evaluation

Photo by Iván Díaz on Unsplash.

Why do a post-launch evaluation?

The biggest reason to do a post-launch evaluation is that it lets you confirm that project goals are met. 

We almost always conduct tests prior to launching a big project. These tests allow us to find major usability issues and often inform key decisions about the design and change management strategy of our website or application. Pre-launch tests have some limitations, though:

  • They're usually limited to structured exercises with users you directly recruit, meaning fewer data sources and a smaller number of users overall.
  • They usually rely on a staged concept or visibly present researchers, either of which can introduce bias.

Because of these limitations, it's important to confirm usability with real users under natural conditions. That's why you should do a post-launch evaluation! The results of your post-launch evaluation will allow you to:

  • Take steps to address usability or service delivery issues before the project is officially over;
  • Carry proven strategies forward into future projects;
  • Create a baseline for usability, which can be used as a reference in the future; and
  • Congratulate the project team and document the value of their contributions.

The post-launch evaluation can also be helpful during the project. Often, teams start to feel anxious as the project approaches launch. (Did we make the right choices? Did we miss something? What about this amazing new idea - should we jump on it before it's too late?)

In most cases, chasing these questions in the weeks before launch will do more harm than good. If the project has been managed in a thorough, data-driven way, everything in the project is there for a reason. Getting distracted by hypotheticals and alternative options can demoralize a team and delay the delivery of the good work they've done.

This is where the post-launch evaluation comes in: it gives you and your team a place to record those concerns and ideas so that they can be evaluated after launch. This helps everyone focus on the crucial pre-launch work and avoids hasty last-minute changes. 

How to do a post-launch evaluation

Start early

Begin planning your post-launch evaluation during the period when you're preparing for launch. At this stage:

  • You have a near-complete version of the website or application.

  • You've given an end-to-end demo to key stakeholders and backstage teams.

  • You're able to create drafts or outlines of key service management resources like training guides and knowledge base documents.

  • You've collected the internal and end-user feedback needed to refine and launch the project. 

In other words: if you're done collecting pre-launch feedback, you're ready to plan the post-launch evaluation. 

Use a structured template in an accessible location

We use a basic table to capture four things: 

  • The question or concern, 
  • The method(s) we'll use to evaluate it,
  • The results of that evaluation (when available), and
  • The action we'll take

You can create this table at any time, but make sure it's in place and ready when the team is preparing to launch the project.

Be sure to put the table in a document that's easy for all team members to find and contribute to. This will vary based on the project, but most teams create a shared documentation space in Confluence or SharePoint. Keep the post-launch evaluation document alongside other reference material for the project, instead of in a personal folder or workspace used by your department. 
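
If it helps to get the document started, here's a minimal sketch (in Python, purely illustrative) that generates a starter spreadsheet with the four columns described above. The file name and the example row are hypothetical placeholders, not part of any official template; you could import the resulting CSV into Confluence, SharePoint, or Excel and maintain it there.

```python
import csv

# The four columns of the post-launch evaluation table described above.
COLUMNS = ["Question or concern", "Evaluation method(s)", "Result", "Action"]

# One placeholder row: a concern captured before launch, with the result
# and action left blank until the post-launch evaluation is complete.
STARTER_ROWS = [
    ["I wonder if users will notice the new navigation menu.",
     "Heatmaps; user interviews", "", ""],
]

# Write a starter CSV (hypothetical file name) that can be imported into
# Confluence, SharePoint, or a spreadsheet tool.
with open("post-launch-evaluation.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(STARTER_ROWS)
```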

Start filling it in

Once you've completed all the in-project usability evaluations you can accommodate, keep an ear out for statements like:

  • "I wonder if users will..."
  • "Now that we're building this, I wonder if ____ would be better."
  • "The project sponsor is worried that ____."

These are questions that can really distract a team - especially if you're trying to get a thoughtfully designed, well-executed project out the door!

Whenever you hear statements like this, your first step should be to see if you can glean insights from existing data. Most projects include multiple research exercises that have already answered key questions or concerns, and that existing data may shed light on the new question.

If you don't have the data and it's too late to collect it before launch, add the idea to the post-launch evaluation table. Take the time to think of at least one method of evaluating it. Invite your team to review the table and add questions or concerns independently, too. 

Plan your evaluation activities

As your project approaches launch day, you can plan the evaluation activities mentioned in your table. Usually, you'll have a few different evaluation methods to draw on after a project launches.

Most projects will include:

  • A review of analytics data to monitor navigation patterns and look for dead ends or red flags (see the sketch after this list). Analytics is usually reviewed early because it can add more questions or areas of inquiry to the evaluation plan. It's also the most common recurring evaluation teams use to make sure their site or application stays healthy in the months and years after launch.
  • Heatmaps and/or interaction videos collected from live users for a defined period, which allow you to "spy" on real users and see how they typically interact with an interface or workflow. This can also be done soon after launch.
  • Surveys with real users and/or support staff who interact with real users, which will help you understand how these audiences perceive their experience.*
  • Interviews with representative users and/or support staff to dig deeper into qualitative aspects or key questions.*
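
To make the analytics review concrete, here's a minimal sketch in Python. It assumes a hypothetical CSV export of page-level metrics with page, pageviews, and exits columns (your analytics tool's export format and column names will differ) and flags pages where most views end, which can point to dead ends worth adding to the evaluation table.

```python
import csv

EXIT_RATE_THRESHOLD = 0.6  # flag pages where more than 60% of views are exits

# Hypothetical analytics export: one row per page with view and exit counts.
with open("page-metrics.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        pageviews = int(row["pageviews"])
        exits = int(row["exits"])
        if pageviews == 0:
            continue
        exit_rate = exits / pageviews
        # A high exit rate on a mid-journey page can signal a dead end or
        # red flag to investigate further in the post-launch evaluation.
        if exit_rate > EXIT_RATE_THRESHOLD:
            print(f"{row['page']}: exit rate {exit_rate:.0%} - review for dead ends")
```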

As you validate your analyses (in the case of analytics, heatmaps, and interaction videos) or plan your questions (in the case of surveys and interviews), make sure to cover all the questions or concerns your team brought up.

However, "coverage" doesn't have to be exhaustive. For some questions or concerns, you may not need a direct evaluation action or interview question. This is common for questions like, "are users bothered by ____?" or "do users notice ___?"

In these cases, it can be enough to observe users behaving naturally with the website or application. If they complete an action smoothly, don't seem bothered, or don't mention noticing something, you can consider the question answered! Asking users what they're seeing and thinking, and observing omissions (rather than directly asking them about individual features), can help keep your sessions to a manageable length and maintain a more natural flow during interview exercises. Reserve your detailed questions for high-stakes areas of the application or areas directly related to users' success.

* Note that inquiries with support staff can give you qualitative data about pain points, but cannot accurately assess pain or satisfaction across all users. Geographic, economic, and cultural factors can influence the likelihood of someone requesting support. For this reason, it's important to complement support data with other inquiries based on representative samples. 

Need help setting up your evaluation activities? Our UX at McGill course is a great place to start! You can also get personalized recommendations through our consultation service.

Summarize findings and key actions

Once you've completed your evaluation activities, you should have raw data that addresses all your questions and concerns.

You have two options for documenting that data. You can create an analysis that follows the questions or script of your main usability exercise (and ties in analytics, heatmaps, and other analyses), or you can collate data on different topics into your original table.

Here's a sample of what the table approach might look like at the end of your evaluation:

Row 1

Question or concern: Are users (across all applicant pools) generally successful at avoiding or correcting errors on the application form?

Evaluation method(s): User interviews/live testing; completion statistics; user interaction videos (if available)

Result: Applicants generally found form questions to be clear and aligned with their expectations. They used words like "straightforward" and "standard." In general, they successfully answered questions without generating errors.

In live testing, we deliberately planted errors in the form and asked subjects to fix them from the "application review" page. All participants understood this page and were able to fix errors from here.

  • "This is nice and convenient."
  • 20% observed that this is better than "other universities," which tell you about errors but don't help you find and fix them.
  • One observed that it would be nice to jump directly to the field or make the field stand out.

Action: None needed. Errors seem easy to avoid, and the process to correct them seems highly usable.

We could improve usability by making it even easier for applicants to find the error once they're on the right page, but the application makes this difficult to implement. It may not be worth the effort, since applicants are already very successful.

Row 2

Question or concern: Are applicants actually reading the information we provide about programs?

Evaluation method(s): User interviews/live testing; analytics; heatmaps

Result: Applicants do read the "program information" content when it's displayed on the program selection step. They do not read it at the "program explorer" step. At the program selection step:

  • 70% commented on the deadline and the display of programs offered under more generic degrees (such as B.A. or B.Com.).
  • This content was read and appreciated - applicants liked being able to find their major in the list.
  • Most participants also read the deadline.
  • Of those who commented on this section, 70% understood and/or described it as "clear."

Action: Discontinue the program information on the "program explorer" step. Applicants do not expect or read it here, so there's no point in entering and maintaining this information.

Keep the program information on the "program selection" step. Applicants appreciate seeing the programs offered and the deadlines. Be cautious about changing this section, since satisfaction is high.

If you take this approach, you should also link to the individual evaluations (like user interviews or analytics) to help your team explore the findings in more detail. 

Regardless of how you document the results, make a note of ambiguous responses or possible alternative options to discuss with your team. You don't need to have all the answers! 

Concluding your post-launch evaluation

Once you've collected and organized your data, schedule a conversation with the project team. This allows you to validate the research against the team's expertise and fill in knowledge gaps that will clarify how you interpret the data. It also gives the team a chance to explore the results and identify which corrective actions are most feasible for the project.

Don't make this meeting into an email! Helping the project team understand and act on the results is more important than anything else.

Need help getting started?

We're here to help! Request a consultation to get personalized recommendations and resources for your next project - from start to finish and beyond. We'll help you make sure your project (and its post-launch evaluation) is efficient, successful, and data-driven.
