
Using Discovery tool data to refine the questions and scoring

Thanks to the aggregate data we are getting from our first pilot users, we have been able to compare the median scores for each of the questions asked, and to review other statistics across the different assessments.

We were pleased to see from the first data returns that ‘depth’ and ‘breadth’ questions produce the medians we would expect, with one or two exceptions. We’ve worked on these outlying questions to make it a bit easier (or in one case a bit harder) to score in the middle range. This should bring the medians more into line with each other, making it easier and more valid to look across aggregate scores and compare areas of high and low self-assessment.

Median question scores, all capabilities (‘all staff’ assessment, snapshot from early March 2018)
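
For readers curious about the kind of check involved, here is a rough sketch of the median comparison in Python. The question names, scores and the one-point threshold are invented for illustration; the real analysis runs on the tool's actual response data.

```python
# A rough sketch (not the Discovery tool's actual pipeline) of comparing
# per-question medians. Question names and scores are invented.
import pandas as pd

# One row per pilot user, one column per question score
responses = pd.DataFrame({
    "depth_q1":   [2, 3, 3, 4, 2, 3],
    "breadth_q1": [3, 3, 2, 3, 4, 3],
    "depth_q2":   [4, 5, 4, 4, 5, 4],   # example of an outlying question
})

medians = responses.median()
typical = medians.median()

# Questions whose median sits well away from the rest are candidates
# for making slightly easier (or harder) to score in the middle range.
outliers = medians[(medians - typical).abs() >= 1]
print(medians, "\n\nCandidates for review:\n", outliers, sep="")
```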

There will always be some natural variation in average scores, because we are asking about different areas of practice, some of which will be more quickly adopted or more generally accomplished than others.

We were particularly pleased to find on testing that there is a positive correlation between confidence and responses to other questions in the same area (i.e. expertise and range). We would expect this, but it is good to have it confirmed. However, although there was a meaningful range of responses, almost no users rated themselves below the mid-point for confidence, so we are looking to adjust the scoring bands to reflect this. We don’t attach a great deal of weight to this question type, precisely because users are known to over-state their confidence, but it is included to encourage reflection and a sense of personal responsibility.
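
The correlation check can be illustrated with a minimal sketch like the one below. The scores are made up, and a Spearman rank correlation is used here simply as one reasonable choice, not necessarily the statistic applied in the pilot analysis.

```python
# Minimal sketch: does self-rated confidence track the other question
# types (expertise, range) in the same area? Scores here are invented.
import pandas as pd
from scipy.stats import spearmanr

area = pd.DataFrame({
    "confidence": [3, 4, 4, 5, 3, 4, 5, 3],
    "expertise":  [2, 3, 4, 5, 3, 3, 4, 2],
    "range":      [3, 3, 4, 4, 2, 4, 5, 3],
})

# Correlate confidence with the mean of the other responses in the area
other = area[["expertise", "range"]].mean(axis=1)
rho, p = spearmanr(area["confidence"], other)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```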

You will see the impact of this work when we reach the mid-April review point, along with some further changes to the content and platform indicated by our user feedback. More about this below.

Scoring is designed to deliver appropriate feedback

As you can see, we’re doing what we can to ensure that the scores individuals assign themselves are meaningful, so that relevant feedback can be delivered. The question types available don’t allow us to match individual selections to feedback items (for example, linking items not chosen in the grid or ‘breadth’ questions to ‘next steps’ suggestions in the personal report), so we rely instead on aggregate scores for each digital capability area. The pilot process is allowing us to find out how well the scoring process delivers feedback that users feel is right for them, and how the different areas relate to one another (or don’t!). However, the questions and scoring are not designed to provide accurate data to third parties about aptitude or performance, so scoring data, even at an aggregate level, should be treated with a great deal of caution. We are issuing new guidance on interpreting data returns very shortly.
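
As a purely illustrative example of band-based feedback on aggregate area scores, the sketch below maps a score onto a band. The band names and thresholds are invented; the tool has its own scoring ranges and report text.

```python
# Purely illustrative: mapping an aggregate capability-area score to a
# feedback band. Band names and thresholds are invented, not the tool's.
def feedback_band(aggregate: float, maximum: float = 100.0) -> str:
    proportion = aggregate / maximum
    if proportion < 0.4:
        return "developing"
    if proportion < 0.7:
        return "capable"
    return "proficient"

# Each band would then select the matching report text and 'next steps'
# suggestions for that capability area.
print(feedback_band(55.0))  # -> capable
```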

The radial diagram gives a quick overview of individual scores
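
For anyone curious how such a diagram is put together, here is a minimal matplotlib sketch of a radial (radar) chart. The capability area names and scores are examples only, not taken from the tool.

```python
# Minimal radar-chart sketch of one user's aggregate area scores.
# Area names and scores are examples only.
import numpy as np
import matplotlib.pyplot as plt

areas = ["Area A", "Area B", "Area C", "Area D", "Area E", "Area F"]
scores = [60, 45, 70, 55, 65, 50]

angles = np.linspace(0, 2 * np.pi, len(areas), endpoint=False).tolist()
angles += angles[:1]          # repeat the first point to close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(areas, fontsize=8)
ax.set_ylim(0, 100)
plt.show()
```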

The aim of the Digital discovery tool is developmental, so it is clear what progress looks like, and ‘gaming’ the scores would be simple. Our contextualising information is designed to remove that temptation by showing that the discovery process is for personal development, not for external scrutiny. Our feedback from staff in particular suggests that if there is any hint of external performance monitoring, they won’t engage or, if required to engage, they won’t answer honestly. That, of course, would mean there is no useful information for anyone!


The ongoing evaluation process

Showing where to find the evaluation form on the dashboard

As well as examining user data, of course, we have access to the individual evaluation forms that (some) respondents fill out on completion. This is giving us some really useful insights into what works and what doesn’t. However, at the moment we think the sample of respondents is weighted towards people who already know quite a lot about digital capability as a concept and a project. The views of people with a lot invested are really important to us, but we also need feedback from new users who may have a very different experience. Please encourage as many of your users as possible to complete this step. The evaluation form is available from a link on the user dashboard (see above).

In addition, we have gathered a variety of expert views, and we are about to launch a follow-up survey for organisational leads. This will ask what you have found beneficial about the project, what has supported you to implement it in your organisation, what you would change, and how you would prefer Jisc to take the Discovery tool project forward. Please look out for the next blog post and launch!
