Blackboard Usability Study

Blackboard Website


Blackboard is a learning management system employed by many tertiary education providers globally. The website provides end-users with the ability to organise, store, share, and assess academic content. As the system's user interface can vary significantly between institutions, it is essential that each institution's own end-users are the main focus of its implementation. End-user experience and interaction are vital in determining the success of any product, and Blackboard is no exception. By measuring the five usability quality components, we can identify areas in need of improvement to increase its ease of use. The five areas identified are Learnability, Efficiency, Memorability, Errors, and Satisfaction.

As this study seeks to identify the issues experienced by the existing user base, Efficiency, Memorability, Errors, and Satisfaction were defined as the main categories of interest. Because participants had previous exposure to Blackboard, Learnability could not be measured. The four identified categories are defined by Isa, Lokman, Wahid, & Sulaiman (2014) as follows.

  • Efficiency is a measure of the speed with which users can perform tasks. This category seeks to measure how easily end-users can operate the website with the minimum amount of input.
  • Memorability measures how easily users returning to the design can perform previously performed tasks. This seeks to assess how impactful the design has been at conveying initial knowledge, in addition to providing a natural flow through the website.
  • Errors are the measure of the frequency and severity of end-user mistakes, in addition to users' ability to recover from said errors. This metric seeks to measure how intuitive the website is for existing users, minimising their frustration or difficulty in using it.
  • Satisfaction is the measure of how pleasant users find the design. This seeks to gauge how pleasant interactions with the interface are, and the general feeling users have upon viewing the visual design.

Usability Testing

To measure the impact of Auckland University of Technology's Blackboard design, 12 third-year students were selected to perform a series of tasks reflecting a typical interaction with the website. The 12 participants, whose demographic details were assessed pre-study, were subjected to qualitative, quantitative, attitudinal, and behavioural assessment. These assessments took the form of a task-oriented interview, intercept surveys, usability lab studies, and ethnographic field studies.

The assessment methods selected for this study enabled the results both to directly measure the impact Blackboard's design had on participants in specific areas, and to record observations in areas not previously identified as relevant to the study.

The task-oriented interview, intercept surveys, and elements of the ethnographic field studies were selected to measure the attitudes expressed by participants before, during, and after the study, using both qualitative and quantitative approaches. The remaining assessment techniques, usability lab studies and elements of the ethnographic field study, were selected to assess the behaviour of the participants using a qualitative approach.

The Questionnaire

The pre-study questionnaire allowed us to obtain demographic data. This allowed us to compare variables such as experience, age, ethnicity, field of study, and year of study. These variables enabled us to explore ethnographic research areas during the study. The main focus of the pre-study questionnaire was to observe correlations between demographic variables and the ethnographic and experimental data collected.

The post-study questionnaire alternated between 'positive' and 'negative' tone, where users were able to select '1' where they strongly agreed with the proposed statement, or '5' where they strongly disagreed. The design of the questionnaire allowed us both to capture the intensity of each response and to reduce acquiescence bias, where users might otherwise have agreed blindly with the questions provided.
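Alternating wording means negatively worded items must be reverse-coded before scores can be compared on a common scale, which is the standard treatment for this questionnaire design. A minimal Python sketch of that step; the item names and responses below are hypothetical, not this study's data:

```python
# Reverse-code negatively worded Likert items so that, after coding,
# a lower score always means a more favourable response.
# Scale: 1 = Strongly Agree ... 5 = Strongly Disagree (as in this study).

def reverse_code(responses, negative_items):
    """Flip 1-5 responses for negatively worded items (1 <-> 5, 2 <-> 4)."""
    return {item: (6 - score if item in negative_items else score)
            for item, score in responses.items()}

# Hypothetical responses: q2 and q4 are the negatively worded items.
raw = {"q1": 1, "q2": 5, "q3": 3, "q4": 4}
coded = reverse_code(raw, negative_items={"q2", "q4"})
```

After coding, a participant who strongly agreed with a positive statement and strongly disagreed with a negative one receives the same score of 1 on both, so item averages can be compared directly.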

Research Framework


Pre-study activities include all research processes that take place before the usability experiment. Before the usability session, all participants are asked questions about their demographics, previous experience with similar software, and how long they have been using Blackboard. These questions seek to measure how the participants feel about the software leading up to the study, their level of expertise with the software, and how their background may influence the results.


The experiment involves participants completing three tasks in a one-to-one usability session. All tasks begin from the homepage of a logged-in Blackboard user. The three tasks are as follows:

  1. Navigate to any lecture slide for a designated subject.

  2. Retrieve the grade for a specific assignment.

  3. Find the date and time the requested exam will take place.
The completion time was recorded independently for each task, starting once the participant was on the home page and indicated they were ready to start. Recording was stopped once the set condition for the task was met, e.g. reading aloud the grade for the assignment. After each task was completed, the participants were invited to discuss their experience with the session moderator. This discussion was recorded, with the participants' agreement.


A post-study questionnaire of 10 questions was provided after the usability experiment, covering the features, user interface, and performance of the website. The survey required users to indicate their level of agreement with different statements on a scale of 1 - 5 (1 - Strongly Agree; 2 - Agree; 3 - Neutral; 4 - Disagree; 5 - Strongly Disagree). The questionnaire incorporated sections that enabled participants to provide qualitative written data in the form of comments and recommendations.

Results & Analysis


The participants were all third-year university students aged between 20 and 28. There were 11 males and one female. 11 of the students were studying towards a Bachelor of Computer and Information Science, and one student was studying towards a Bachelor of Science. Chosen majors included analytics, software development, computational intelligence, and computer networking. All participants had been using the Blackboard system for at least three years, and two had been using it for four and five years respectively. The majority of participants had no experience with any educational management system before using Blackboard. Two participants had used the Canvas system at the University of Auckland, and another two had used similar systems in high school. The pre-experiment impression of the Blackboard system was mostly positive, and all but one participant believed the application fulfilled its role as a tool for education management. Many participants commented that they found it very easy to use and that it was beneficial in organising their study content. Some of the negative preliminary feedback mentioned that the site had poor smartphone optimisation, could be challenging to learn, and needed to be more intuitive when it came to finding exams.


Method: For each of our three tasks, participants were timed from the moment they touched the laptop until the desired outcome was achieved or the participant abandoned the task. Each task had a different desired outcome: task one ended when the user opened a lecture slide, task two when the user read the overall grade aloud, and task three when the user read the exam time information aloud.
Task One
Task one was to open a lecture slide within the COMP719 – Applied Human Computer Interaction paper. All users were able to complete this task successfully, as most of them had experience with the Blackboard system; however, some confusion arose because the equivalent pages in other papers within the Blackboard system are named "course notes", "lecture notes", or "resources".

From Table 1, the following statistics were obtained:

- Total Completion Time: 260.51 seconds
- Average Completion Time: 21.61 seconds

Table 1: Task 1 Completion Times
Task Two
Task two was to identify the total grade for ENSE701 Contemporary Methods in Software Engineering, Assignment One. If the grade a participant found was wrong, we told them so and asked them to continue looking. All users were able to complete this task successfully; however, several users were uncertain whether they had found the correct grade, as the overall grade was not clearly labelled and there were multiple navigation pathways to it, which added to the confusion.

From Table 2, the following statistics were obtained:

- Total Completion Time: 422.28 seconds
- Average Completion Time: 35.19 seconds

Table 2: Task 2 Completion Times
Task Three
Task three was to identify the date and time that the exam for COMP700 Text and Vision Intelligence will take place. Most users noted that the placement of the timetable was unusual and unintuitive, and that without previous experience this would be a difficult task to complete. This was borne out when participant 5, who had no previous experience with the exam timetable, was unable to complete the task. Since participant 5 could not complete the task, they were allotted the maximum time by default. Their result was omitted from the average completion time for consistency but was left in the total time.

From Table 3, the following statistics were obtained:

- Total Completion Time: 858.28 seconds
- Average Completion Time: 39.89 seconds

Table 3: Task 3 Completion Times

Finding: We ordered the tasks so that the first involved something all students do on a daily or weekly basis (opening lecture notes), followed by viewing grades (done monthly or more often), and lastly viewing exam timetables (done at the end of the semester). This gradient allowed us to judge memorability, as we noticed that each task took longer than the one before: task one averaged 21.62 seconds, task two 35.19 seconds, and task three 39.89 seconds (after excluding the outlier). This supports the memorability part of our research criteria: tasks that users have performed more often are completed faster. Task one and task three required the same number of clicks to open and load the file.
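The outlier treatment used for task three (the non-completing participant's maximum allotted time counted in the total, but excluded from the average) can be sketched as follows. The function name, sample times, and maximum time are illustrative, not the study's raw data:

```python
# Per-task statistics following the method described above: a participant
# who could not finish is assigned the maximum allotted time in the total,
# but is excluded from the average for consistency.

def task_stats(times, max_time):
    """times: completion times in seconds; None marks a did-not-finish."""
    completed = [t for t in times if t is not None]
    total = sum(completed) + max_time * times.count(None)
    average = sum(completed) / len(completed)
    return total, average

# Illustrative values: three completed runs and one did-not-finish.
total, average = task_stats([10.0, 20.0, 30.0, None], max_time=60.0)
```

This keeps the total comparable across tasks (every participant contributes a time) while preventing the arbitrary default from inflating the average.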

User Feedback - Key Takeaways


Method: Once the usability tasks were complete, participants completed a questionnaire. Participants indicated a level of agreement with ten statements, using a 1-5 scale, and had the option to write an additional comment per statement. For analysis, the results of the questionnaire were grouped into three usability categories: features, user interface, and performance.


Features

Table 4: Summary of Participants' Scores on Features
Table 5: Summary of Participants' Comments on Features

The first four questions of the usability study focused on features of the website, particularly interactions with navigation menus and links.

When looking at the features of the website, participants highlighted navigation as a potential usability issue. Participants noted that they needed to click through too many menus, and that pages contained information not relevant to what they were doing. They commented that it was not always clear where navigation menus would take the user. Finding exam timetables was highlighted as particularly difficult, and potentially impossible without prior experience of the Blackboard system. At least two participants indicated that finding the exam timetable would be impossible if they had not already learned how to find it. Overall, however, participants agreed with the positive statements about the features of the website and disagreed with the negative one.

User Interface

Table 6: Summary of Participants' Scores on User Interface
Table 7: Summary of Participants' Comments on User Interface

Questionnaire responses show that most participants disagreed with the statement that they did not like the colour palette. Comments on the colour palette were generally favourable to neutral, with only one participant commenting that the colours could be more appealing. Most users also agreed that the website was visually appealing, with comments mentioning the colour palette as another key reason the site was attractive.

On average, there was marginal agreement with the statement that the user interface was cluttered. This was the only negative statement in the post-study questionnaire that more people agreed with than disagreed with. Comments indicated that there may be too much going on per webpage, particularly the home page. It is interesting to note that the average score of users who found Blackboard cluttered (2.5) is very similar to that of users who found Blackboard hard to navigate (2.75). There could be a correlation whereby users who find Blackboard visually clear also find it easy to navigate. A study by Dr David R. Danielson (2002) on web navigation and behavioural effects states that a lack of visual cues in navigation can contribute to a website feeling "cluttered", which ties back to the hypothesis and correlation noted above.


Performance

Table 8: Summary of Participants' Scores on Performance
Table 9: Summary of Participants' Comments on Performance

Questionnaire results covering the performance of the website produced the two highest positive responses. On average, participants felt load times were appropriate and did not agree that the site was unresponsive. Some comments mentioned how fast the website loaded and that it felt responsive. One participant commented that they thought some parts of the page could load a little faster, but otherwise the feedback indicates that performance-related usability issues are very limited, suggesting the website may be well optimised. A contributing factor to the positive performance of the Blackboard site may be the high broadband speed available at the Auckland University of Technology, where the usability studies took place. Testing the perceived performance of the site at differing broadband speeds would be useful, but is beyond the scope of this project.

Conclusion & Recommendations

Based on the feedback gathered and reviews given by the users, this study recommends the following changes for AUT to consider:

  • Introduce the ability for documents and PowerPoint presentations to be viewed within the browser instead of being downloaded. This will save space on user hardware and save outbound bandwidth on university servers.

  • Create a consistent naming scheme for lecture-note pages across papers. Currently, these pages are variously named "course notes", "resources", etc. A consistent scheme should prevent false clicks and wasted time.

  • Provide grades under both the grades page and the subject page. This aims to save user time, as some users currently look under the paper page for assignment grades.

  • Add a widget to the main page showing the exam timetable, similar to SDW. This saves users from having to locate the exam timetable through exhaustive navigation and then use Ctrl+F to find their papers.

  • Mention grades within notifications. For example, "A grade was submitted for Assignment One in the R&D paper".

  • Adopt a minimalistic design so that the interface does not appear cluttered.

The recommendations above aim to reduce the time staff and students spend navigating and accessing the system, which in turn results in saved staff hours and lower resource usage.