Created by CE 4/3/20, thanks to Doris Coveny for the final question to trigger this.
This arose out of writing an FAQ answer about age ranges, which reminded me that there are some quite complex issues if you have clients/participants who might be under 18 or over 65. This document looks first at older adults, then at young people and even younger people, and then comes back to encourage you to think about all this in the context of what you want to get from your data.
The first paper about using the CORE-OM with older adults was Barkham et al. (2005), which reported on a sample aged 65 to 97 and found, perhaps predictably, that any issues about acceptability were to do with energy and concentration resources rather than age itself. The total scores showed acceptable internal reliability (.83 for the non-clinical group and .90 for the clinical, both statistically significantly, though not substantially, lower than most reported values for younger adults). They found good separation between clinical and non-clinical means but noted:
However, the norms for the clinical sample were consistently lower than the equivalent clinical norms for a working-age sample. These findings suggest that the collection and compilation of age-specific norms is crucial in ensuring that appropriately referenced norms are used rather than assuming that norms are generalizable across the whole adult life-span. (Barkham et al., 2005, abstract, p. 235)
Younger people and the YP-CORE
The two existing papers about the YP-CORE from the CORE system team (Twigg et al., 2009; Twigg et al., 2016) both refer to a design range from age 11 to 16, but we know that it is widely used beyond 16, as services typically see young people up to age 18, notably Child and Adolescent Mental Health Services (CAMHS) in the UK NHS. We also know that many services for “young adults”, outside the NHS in the UK and in other countries, often have an upper age limit of 25; some of those use the CORE-10 through their entire age range. We also know that some services and practitioners use the YP-CORE below the age of 11.
The second paper (available on request) showed clear differences not only in mean scores and clinical/non-clinical cutting points but also in internal reliability, by both gender and age group (11–13 versus 14–16). The cutting points were summarised as:
For clinical change, scores must cross the following YP-CORE cut-off points: 10.3 (male, 11–13 years), 14.1 (male, 14–16 years), 14.4 (female, 11–13 years) and 15.9 (female, 14–16 years). (Twigg et al., 2016, abstract, p. 115)
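Purely as an illustration (not an official CORE tool), those gender- and age-specific cut-off points can be put into a small lookup. The function and dictionary names here are my own inventions, and the only real data are the four cut-off values quoted above from Twigg et al. (2016):

```python
# Sketch of applying the YP-CORE cut-off points by gender and age band.
# The cut-off values are from the Twigg et al. (2016) abstract; all names
# here are illustrative, not part of any official CORE software.
YP_CORE_CUTOFFS = {
    ("male", "11-13"): 10.3,
    ("male", "14-16"): 14.1,
    ("female", "11-13"): 14.4,
    ("female", "14-16"): 15.9,
}

def age_band(age: int) -> str:
    """Map an age in years to the YP-CORE design age bands."""
    if 11 <= age <= 13:
        return "11-13"
    if 14 <= age <= 16:
        return "14-16"
    raise ValueError(f"Age {age} is outside the YP-CORE design range (11-16)")

def above_cutoff(score: float, gender: str, age: int) -> bool:
    """True if the score falls at or above the clinical cut-off point."""
    return score >= YP_CORE_CUTOFFS[(gender, age_band(age))]

print(above_cutoff(12.0, "male", 12))    # True: 12.0 >= 10.3
print(above_cutoff(12.0, "female", 15))  # False: 12.0 < 15.9
```

The point of the sketch is simply that the same raw score (12.0 here) falls on different sides of the cut-off depending on gender and age band.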
Even younger people
This should probably be an FAQ as well as a section here as I’ve had a run of questions about measures for children under the age of 11. That’s not my expert research area, and I did very little clinical work with children, really only in the context of family/systemic therapies, but I’m very wary of any standardised self-report measures for children under 11. It’s not that I think there aren’t children in that age range who can use self-report measures like the YP-CORE: I’m quite sure there are children under 11 who are just as capable as any 11 to 13 year old of using such a measure to reflect on their own state over the last week, who will read and answer it in much the same way as 11 to 13 year olds, and whose scores will reflect their state over the last week just as well as those of the somewhat older children. It’s that below 11 there will be other children (and probably a few in the 11 to 13 age range too) who will experience the questions rather differently from some of their peers and from how most older children will reflect on their own states.
I think this issue of how younger people handle self-report measures is probably affected by family background, education, facility with language, expectations in their peer group and probably other issues. I don’t know if there is good work, qualitative and quantitative, exploring this, but I think there should be!
What do you want from your data?
It’s very easy for people, researchers, practitioners and researcher-practitioners alike, to get rather preoccupied with what can be measured (however accurately or roughly) and forget that what matters with data is not so much the data points as what you want to learn from them. Perhaps this challenge gets stronger the more precisely a variable can be determined, and age is generally such a variable.
What really matters is what you want from the data, and sometimes remembering this can simplify things. The Barkham et al. (2005) paper showed very clearly that older adults appear to need different cutting points for the CORE-OM and have rather different internal reliability values from younger adults. Twigg et al. (2016) showed equally clearly (and with confidence intervals!) that adolescents in different age groups (11–13 and 14–16) need different cutting points, and that the cutting points also differ by gender.
None of this should surprise us: ageing really does change the challenges of older adulthood, and the rapid changes of adolescence do so too, across much shorter time periods and in ways that can differ markedly by gender. Use the most recent cutting points that look pertinent to your own clients (or research participants), and if your setting is, say, not the UK, or your clients/participants are completing the measure in a different language or within a particular cultural subgroup, please don’t assume that those UK cutting points (which I will try to update here if new data changes them) will apply usefully for you.
In general, for any aggregated data, it’s better to use the original scores than to dichotomise them. Particularly if you are studying change, analysing scores will be far more informative than analysing cutting point classifications. Perhaps there’s a general warning here: we probably overvalue these cutting points in our field.
Of course, when considering change for any one client there has always been a clinical tendency to think in dichotomies: did this work help? Is the client better? Are they out of trouble now? Jacobson and colleagues developed the “Clinically Significant Change” (CSC) cutting point to address this, and there is much that is sensible in it. Their criterion c, which needs both referential clinical and non-clinical data, has always been the preferred CSC cutting point (and Evans et al. (1998) has been one of our more cited collaborative papers). However, as we note in that paper, Jacobson et al. also invented the “Reliable Change Index” (RCI). This is strictly not a fixed criterion but defines the limits of change that would be expected from measurement unreliability alone, and it should be computed for your own local data (https://www.psyctc.org/stats/rcsc1.htm is a simple tool to help you do this). Using both the CSC and the RCI allows you to look at change from one point to another for each client and categorise it as crossing the clinical/non-clinical cutting point or not, and as showing reliable improvement or deterioration, i.e. change unlikely to have arisen from measurement unreliability alone.
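For readers who want the arithmetic behind that online tool, here is a minimal sketch of the standard Jacobson–Truax formulas for the RCI threshold and the CSC criterion c. The means, SDs and reliability you feed in should come from your own or appropriate referential data; all names and example numbers below are illustrative placeholders, not real CORE values:

```python
import math

def rci_threshold(sd_pre: float, reliability: float, z: float = 1.96) -> float:
    """Smallest pre-post difference unlikely (p < .05, two-tailed) to arise
    from measurement unreliability alone (Jacobson & Truax RCI)."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * se_measurement
    return z * s_diff

def csc_criterion_c(mean_clin: float, sd_clin: float,
                    mean_nonclin: float, sd_nonclin: float) -> float:
    """Criterion c: the point between the clinical and non-clinical
    distributions, weighted by their SDs, beyond which a score is more
    likely to belong to the non-clinical population."""
    return (sd_nonclin * mean_clin + sd_clin * mean_nonclin) / (sd_clin + sd_nonclin)

def classify_change(pre: float, post: float, rci: float, csc: float):
    """Combine RCI and CSC into the usual joint categories,
    assuming higher scores mean more problems."""
    diff = pre - post  # positive = improvement on problem-scored measures
    if abs(diff) < rci:
        reliable = "no reliable change"
    else:
        reliable = "reliable improvement" if diff > 0 else "reliable deterioration"
    crossed = "crossed cut-off" if (pre >= csc) != (post >= csc) else "did not cross cut-off"
    return reliable, crossed

# Placeholder referential values, purely to show the mechanics:
rci = rci_threshold(sd_pre=5.0, reliability=0.85)          # about 5.37
csc = csc_criterion_c(18.0, 6.0, 6.0, 4.0)                 # 10.8
print(classify_change(20.0, 10.0, rci, csc))
```

Note that the classification depends on the direction of scoring; the sketch assumes a problem-scored measure, as the CORE measures are.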
What if you are looking at change in adolescence over substantial periods of time, such that some of your clients/participants will cross from 16 to 17, i.e. crossing from the age focus of the YP-CORE’s design to that of the CORE-OM and its short forms? Well, it probably depends on what you want: if you want to track change in your clients/participants, it’s probably better that they stick with the YP-CORE if they started with that. If you want strict comparability with referential data then you should probably change the measure from the YP-CORE to the CORE-OM or a short form at … hm, either 17 or 18, it’s up to you! Facing this issue for the ITAMITED project, we chose 18 as the age at which a new client would get the CORE-OM rather than the YP-CORE, but if a client had his/her 18th birthday during an episode of care the YP-CORE would continue to be offered until the episode ended. Then, should the client start a new episode of care, the CORE-OM would replace the YP-CORE at baseline (with the CORE-SF/A and CORE-SF/B alternating every three weeks except at three-monthly full CORE-OM updates … yes, it’s an intensive study!) For the switch from the CHEAT to the EAT-26 we followed the same principle but with a switch age of 14, in line with the design of the CHEAT. These decisions meant we had comparability of scores across time within each episode for every client, and age-appropriate comparability with referential data except for the small number of clients who passed the switch age during an episode.
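That switching principle can be sketched as a tiny rule: pick the measure by age at the start of each episode and never change it mid-episode. The names here are my own illustrative inventions, not ITAMITED project code, and the switch age of 18 is the YP-CORE/CORE-OM one described above (14 was the CHEAT/EAT-26 equivalent):

```python
from typing import Optional

# Illustrative sketch of the episode-based measure-switching principle:
# the measure is fixed by age at episode start and a birthday mid-episode
# does not trigger a switch.
SWITCH_AGE = 18  # YP-CORE -> CORE-OM; the CHEAT -> EAT-26 switch used 14

def choose_measure(age_at_episode_start: int,
                   current_measure: Optional[str] = None) -> str:
    """Return the measure to offer: keep the episode's existing measure
    if there is one, otherwise pick by age at the start of this episode."""
    if current_measure is not None:
        return current_measure  # within-episode comparability preserved
    return "CORE-OM" if age_at_episode_start >= SWITCH_AGE else "YP-CORE"

print(choose_measure(17))                             # YP-CORE
print(choose_measure(18))                             # CORE-OM
print(choose_measure(18, current_measure="YP-CORE"))  # YP-CORE: mid-episode birthday
```

The design choice this encodes is the trade-off described above: within-episode score comparability is guaranteed for everyone, at the cost of age-appropriate referential comparability for the few who pass the switch age mid-episode.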
I’ve a horrible feeling that this is one of my explanations that has tried to address the complexities but, in the process, become rather a tough read. Please contact me with reactions, advice etc.!
Barkham, M., Culverwell, A., Spindler, K., & Twigg, E. (2005). The CORE-OM in an older adult population: Psychometric status, acceptability, and feasibility. Aging and Mental Health, 9(3), 235–245. https://doi.org/10.1080/13607860500090052
Evans, C., Margison, F., & Barkham, M. (1998). The contribution of reliable and clinically significant change methods to evidence-based mental health. Evidence Based Mental Health, 1, 70–72.
Twigg, E., Barkham, M., Bewick, B. M., Mulhern, B., Connell, J., & Cooper, M. (2009). The Young Person’s CORE: Development of a brief outcome measure for young people. Counselling and Psychotherapy Research: Linking Research with Practice, 9(3), 160–168. https://doi.org/10.1080/14733140902979722
Twigg, E., Cooper, M., Evans, C., Freire, E. S., Mellor-Clark, J., McInnes, B., & Barkham, M. (2016). Acceptability, reliability, referential distributions, and sensitivity to change of the YP-CORE outcome measure: Replication and refinement. Child and Adolescent Mental Health, 21(2), 115–123. https://doi.org/10.1111/camh.12128