How Pilots Work

When a new learning technology enters a formal pilot phase at UBC, it is released and supported in specific courses for a trial run, typically lasting one to two terms. After this trial run completes, the LT Hub gathers feedback from users to understand how the pilot went. Details of this evaluation process are provided below.


Evaluation Process

A formal pilot typically includes a limited number of courses and three groups of users: instructors, teaching assistants (TAs), and students. For each course, the evaluation team looks at how these groups perceived the technology and its usefulness for teaching and learning in their particular context.

Most often, feedback is gathered in the following ways:

Instructors:

  • Provide details on their course context and intended use case(s) before or soon after the start of the pilot.
  • Participate in a reflective interview, after having sufficient experience with the technology in their course(s).

TAs:

  • Participate in a reflective interview or focus group session, after having sufficient experience with the technology.

Students:

  • Participate in a class-wide survey, after having sufficient experience with the technology.
  • When relevant, optionally participate in a focus group session to give more detailed feedback.

Instructor Time Commitments

To support the feedback gathering for a formal pilot, instructors will be asked to help the evaluation team in five small ways:

  1. Provide details of their context(s) and use case(s) for the technology.
  2. Distribute the evaluation student survey (online or paper-based) in each course at an appropriate time.
  3. Actively encourage students to share feedback, including in any relevant focus group sessions.
  4. Connect any TAs for the course(s) with the evaluation team.
  5. Set aside time for a reflective interview near the conclusion of the pilot.

Instructors are not expected to perform any data analysis or write up any findings or reports.

Secondary Use of Data

Instructional teams can use the raw data collected from the evaluation student surveys and focus groups in their course(s). However, the LT Hub will not normally seek BREB (Behavioural Research Ethics Board) approval, so anyone wishing to apply the data to wider research goals or papers may need to consider a BREB application, ideally prior to data collection.

Feel free to contact Adriana Briseno-Garzon at CTLT for additional information and advice.


Final Outcome

All information that the LT Hub gathers for a formal pilot is summarized by the evaluation team in a report. This report is shared with learning technology governance groups and forms one piece of a larger assessment they undertake in deciding on central support and funding for the piloted technology.

Pilot reports do not make recommendations regarding the adoption of the technology at UBC, nor do they make claims regarding its effectiveness. The scope of the report is limited to summarizing how people perceived the technology during the specific pilot: its strengths, its weaknesses, and the best ways to implement it at UBC, if approved.

Once governance groups review the report and decide on an outcome for the technology, the decision is reported through the learning technology status page on the LT Hub website.