Introduction
Cognition is a broad term that refers to the mechanisms by which we acquire, process, store and ultimately use information from the environment [1].
It encompasses processes such as perception, learning, memory, and reasoning [1].
The CLOSER British birth cohorts contain a wealth of information on cognition over the life course, and the cognitive measures available in these studies have been used to answer research questions in many different fields, e.g. education [2, 3], public health [4, 5], economics [6], psychiatry [7], psychology [8-10], and political science [11].
However, these cognitive tests vary considerably both within and across the cohorts, and this has hindered studies of developmental trends and cross-cohort differences. Moreover, there is considerable heterogeneity in the quality and quantity of the documentation used to describe these cognitive assessments, and, to date, there has been no attempt to develop a uniform description of the key features of these instruments.
Therefore, as a first step in facilitating developmental and cross-cohort studies, we provide a comprehensive description of the cognitive measures that are available in five British birth cohorts.
A companion report (which will be available on this website) will assess the feasibility of harmonising the cognitive measures both within and across the cohorts.
Cohorts included
This guide documents the cognitive measures that have been administered in the following studies:
More details on each of the cohorts, including cohort profiles and guidance on accessing the data, can be found on the CLOSER ‘Explore the studies’ page.
Broad-stratum abilities as defined in the CHC model of intelligence [27]
Notation | Ability | Description |
---|---|---|
Gf | Fluid reasoning/ fluid intelligence | Ability to solve ‘novel’ problems without relying on previously acquired knowledge. |
Gsm | Short-term memory | Ability to store and manipulate information in one’s immediate awareness. |
Glr | Long-term storage & retrieval | Ability to store information in memory and recall this information over periods of time ranging from minutes to years. The main distinction between this and Gsm is that, in Gsm tests, there is a continuous effort to maintain awareness of the information, whereas in Glr tests the information has been placed out of conscious awareness for a specified period of time, and must be ‘retrieved’. |
Gs | Processing speed | Degree to which cognitive tasks can be performed quickly and without error. |
Gt | Reaction time | Speed and accuracy with which decisions/judgements can be made when presented with information. |
Gps | Psychomotor speed | Speed and fluidity with which body movements can be made. |
Gc | Acquired knowledge/ crystallised intelligence | Skill/knowledge base acquired, e.g. knowledge of the fundamental meaning of words. Highly dependent on culture. |
Gkn | Domain-specific knowledge | Mastery of specialised knowledge, e.g. foreign language proficiency, geographical knowledge. |
Grw | Reading and writing | Skills related to written language, e.g. reading speed, spelling ability. |
Gq | Quantitative knowledge | Knowledge/achievement related to mathematics. |
Gv | Visual processing | Ability to mentally simulate and manipulate imagery. |
Ga | Auditory processing | Ability to identify and process information from sound. |
Go | Olfactory abilities | Ability to detect and process information from odours. |
Gh | Tactile abilities | Ability to recognise and process information from touch. |
Gk | Kinesthetic abilities | Ability to detect and process meaningful information in proprioceptive sensations. |
Gp | Psychomotor ability | Precision, coordination and strength of body movements. |
At the highest level of the hierarchy, a general cognitive ability factor (g) is posited. Both the structure and validity of this model have been supported in many factor analytic studies [26], and general cognitive ability has been shown to be an important predictor of a wide range of life outcomes across different groups [28].
The key features of each of the cognitive measures are documented as outlined in the table below. Please be aware that some of the features detailed in this report may subsequently have been updated or changed.
Outline of the key features documented for each measure of cognitive ability
Domain: | First, each measure will be classified at the broadest possible level, e.g. whether it assesses a form of verbal or non-verbal (i.e. performance) ability. |
Measures: | This section will list the more specific areas of cognition that are measured by each test, e.g. lexical knowledge, reading comprehension, general sequential reasoning, quantitative reasoning, short-term episodic memory, visual scanning, simple reaction time etc. This information will be taken from the original source documentation for the measure. If the source documentation is unavailable or does not contain this information, we will consult technical resources documented in the cohort literature. |
CHC: | In this section, we will document the broad-stratum ability (e.g. Gc, Gf, Gsm) associated with each test. Again, this will be determined using the source documentation. If the source documentation is unavailable/inadequate, the test/task will be matched with established broad-level cognitive abilities as described in the extant literature, e.g. [24]. For a more detailed description of the CHC model of cognitive ability, see [24, 25, 29]. Not all cognitive tests fit within the CHC framework, for example developmental tests in early childhood and basic language and numeracy tests in adulthood. In such instances, no broad-stratum ability will be assigned. In addition, some tests may be associated with more than one broad-stratum ability. |
Administration method: | Here we will describe the key features of how the test was administered, including the test administrator (e.g. teacher, psychologist, trained interviewer) and method used (e.g. CAPI, pen and paper, oral response). This section will help highlight any mode effects to consider when tests are being compared within/across cohorts. |
Procedure: | We provide a brief description of the test itself and the administration procedure. Details (where available) include: nature of questions/items; number of questions/items; number of sub-tasks (if appropriate); whether practice trials were administered; whether prompts or encouragement were used; and duration of the test. |
Questionnaire: | Where possible we provide links to the original questionnaire documentation (or provide the file name), the majority of which are freely available online. |
Scoring: | In this section we provide information on the scoring of the tests (both raw scores and any standardised/normalised scores available). |
Item-level variable(s): | Here we list the relevant item-level variable names (where available). For some tests, item-level variables were not available, either because the test had not been processed or because the data were not readily available at the UKDS (for further information please contact the relevant data providers). Note that variable names may be in either upper or lower case, so please check for both. |
Total score/derived variable(s): | Here we list (where available) any derived variables (i.e. any variables that were constructed by manipulating the original raw data) and summary/total scores for the test. For some tests, total scores were not available. |
Descriptives: | Where total scores were available we provide basic descriptive statistics for the tests, including number of available cases (N), mean (M), standard deviation (SD), and range of scores. We also include histograms as a means of quickly assessing the distribution of scores, enabling researchers to identify potential issues such as floor and ceiling effects. Note that, although the descriptive statistics are accurate at the time of writing, ongoing updates and improvements to the raw data by the hosts may lead to minor discrepancies with previous/future documents. |
Age of participants: | Here we note the M, SD, and age range (in weeks, months or years, as appropriate) of participants at time of assessment (where available). |
Other sweep and/or cohort: | Where the same measure has been administered in multiple sweeps or cohorts, this information will be recorded here. This does not necessarily mean the test is exactly the same. For example, a British Ability Scales (BAS) test administered previously may have been subsequently revised and updated. There may also be mode effects to consider: e.g. the NSHD, NCDS and BCS all include word-list learning tasks in mid-adulthood, but in NSHD the words are presented visually, whereas in NCDS and BCS they are presented aurally. We have also included references to equivalent tests devised by different test developers. For example, ALSPAC administered the Wechsler Intelligence Scale for Children (WISC-III), which includes sub-scales such as Recall of Digits, also available in the BAS administered in the BCS.
Tests covering very broad, conceptually similar domains such as mathematics and reading, but which are not the same test, are not included in this section. For example, the mathematics tests do not all cover the same fields of mathematics (e.g. arithmetic, algebra, geometry) and include different questions within each field. |
Source: | Here we specify the original source of each test. Typical sources include scale/test manuals, published empirical articles or descriptions of the processes used to create tests specifically for a given cohort study. |
Technical resources: | Here we provide details (where available) of useful technical resources and supplementary materials. Examples include user guides and methodological papers/materials (beyond the core source materials). |
Reference examples: | Finally, (where available) we provide examples of empirical articles that have made use of the given test (in these British birth cohorts only). This section is neither an exhaustive list nor an endorsement of the quality of the reported research or treatment of the cognitive variables therein; rather, it serves simply to provide examples of the measures in use. |
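Two practical points in the table above lend themselves to a short illustration: the Item-level variable(s) row notes that variable names may appear in either upper or lower case, and the Descriptives row lists the statistics reported for each test (N, M, SD, and range). The sketch below, in Python, shows one way a researcher might handle both; the variable names and scores are hypothetical examples, not taken from any cohort dataset.

```python
import statistics

def find_variable(columns, name):
    """Return the column whose name matches `name`, ignoring case.

    Item-level variable names may be stored in upper or lower case,
    so a case-insensitive match avoids silent misses.
    Returns None if no column matches.
    """
    lookup = {col.lower(): col for col in columns}
    return lookup.get(name.lower())

def describe(scores):
    """Basic descriptives as reported in this guide: number of
    available cases (N), mean (M), standard deviation (SD), and
    range of scores. Missing values (None) are dropped first."""
    valid = [s for s in scores if s is not None]
    return {
        "N": len(valid),
        "M": round(statistics.fmean(valid), 2),
        "SD": round(statistics.stdev(valid), 2),  # sample SD (n - 1)
        "range": (min(valid), max(valid)),
    }

# Hypothetical variable names and test scores, for illustration only:
columns = ["N2928", "bcsid", "F119"]
scores = [12, 15, None, 20, 18, 9, 15]
print(find_variable(columns, "n2928"))
print(describe(scores))
```

The sketch uses the sample (n − 1) form of the standard deviation; whether that matches the convention used for a given test should be checked against the source documentation. Plotting a histogram of the same scores (e.g. with matplotlib) is how the floor and ceiling effects mentioned in the Descriptives row would be spotted.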