
Development of the clinical reasoning competency scale for nurses

Abstract

Background

Clinical reasoning is emphasized as an important component of nursing education, since nurses’ lack of clinical reasoning leads to incorrect clinical decision-making. Therefore, a tool for measuring clinical reasoning competency needs to be developed.

Methods

This methodological study was conducted to develop the Clinical Reasoning Competency Scale (CRCS) and examine its psychometric properties. The attributes and preliminary items of the CRCS were developed based on a systematic literature review and in-depth interviews. The validity and reliability of the scale were evaluated among nurses.

Results

The exploratory factor analysis was conducted for construct validation. The total explained variance of the CRCS was 52.62%. The CRCS consists of 8 items for plan setting, 11 items for intervention strategy regulation, and 3 items for self-instruction. The Cronbach’s α of the CRCS was 0.92. Criterion validity was verified against the Nurse Clinical Reasoning Competence (NCRC) scale; the correlation between the total NCRC and CRCS scores was 0.78, and all factor-level correlations were also significant.

Conclusion

The CRCS is expected to provide raw scientific and empirical data for various intervention programs to develop and improve nurses’ clinical reasoning competency.


Background

Clinical reasoning is a cognitive process used to make clinical judgments; in this process, a patient’s history is investigated, a physical assessment is performed, and the results are interpreted to design a health care plan [1, 2]. Nurses acquire information to solve the patient’s problem and combine this information with their knowledge to guide decision-making in patient care [3, 4]. Clinical reasoning involves incorporating the patient’s context and the clinical situation into critical thinking [5, 6]. Metacognition, which enables students to use a multidimensional strategy to search and consider an expanded range of possibilities to solve the problem considering the context [7], is a core attribute of clinical reasoning [8, 9].

A lack of clinical reasoning competency in nurses leads to incorrect clinical decision-making, which affects patient safety [10]. In contrast, strong clinical reasoning competency enhances patient recovery and improves the quality of care. For this reason, a tool for measuring clinical reasoning competency needs to be developed and used. Two tools are widely used to assess nurses’ clinical reasoning: the Health Science Reasoning Test (HSRT) and the Nurse Clinical Reasoning Competence (NCRC) scale. The HSRT, developed by Insight Assessment in the United States, contains 33 questions completed in 50 minutes. The NCRC was developed in Taiwan [11] and includes 15 items derived from the clinical reasoning model [4]. However, neither tool includes metacognitive attributes that monitor and evaluate the cognitive process of problem-solving, which is a key element of clinical reasoning competency. Thus, these tools have limitations in evaluating the multidimensional aspects of nurses’ clinical reasoning.

Previous studies have been conducted to improve clinical reasoning competency using educational methods such as simulation education and problem-based learning [12, 13]. Those studies indirectly measured clinical reasoning competency based on critical thinking [14] and problem-solving [15], which is insufficient. In research on pedagogy, metacognition refers to the control of actions related to obtaining and using information to improve inferential problem-solving. Metacognition helps learners to monitor their reasoning processes and regularly reflect on the process of cognition. It has been reported that activating metacognition improves problem-solving [16]. Therefore, this study was conducted to develop and verify the validity and reliability of the Clinical Reasoning Competency Scale (CRCS) for nurses.

Methods

Research design

This methodological research was conducted to develop the CRCS and examine its psychometric properties. The research was performed according to the methodology suggested by DeVellis [17].

Research procedure

Permission for this study was obtained from the Institutional Review Board (IRB No. 2019-1741-001). The CRCS attributes and preliminary items were developed based on a systematic literature review and in-depth interviews. Building on these attributes, items measuring nurses’ clinical reasoning competency were derived based on self-regulated learning theory. The content validity of the CRCS was verified using the Delphi technique, and the validity and reliability of the scale were evaluated among nurses. Through this process, we aimed to build a highly reliable and valid scale for measuring nurses’ clinical reasoning competency.

Theoretical framework

Self-regulation theory has been used to explain the cultivation of clinical reasoning competency. The theoretical basis of self-regulated learning is constructivist teaching, according to which learners use systematic learning and metacognitive strategies to develop a clear goal or motivation for learning [13, 14]. This theory suggests that clinical reasoning competency can be fostered by controlling one’s actions until a goal is reached [18]. The development of a measurement scale should be based on a theoretical model to help researchers understand what to measure; otherwise, the scale’s validity may be decreased and the data may be misinterpreted [19, 20]. A structured tool built on a theory can also have a positive effect on intervention-based studies [21]. Thus, self-regulation theory was adopted for the development of the CRCS.

Phase 1: analysis of the attributes of clinical reasoning competency

A systematic literature review was performed to identify the attributes and items of the CRCS. First, a literature search was performed for research articles on clinical reasoning measurements. The PubMed, Embase, MEDLINE Complete, Cumulative Index for Nursing Allied Health Literature (CINAHL) with full text, and RISS databases were searched. The search keywords, including non-MeSH terms, were nur* AND clinical reasoning AND measur*. The inclusion criteria were studies with clinical reasoning measurements and studies conducted among nurses and nursing students. The literature search was limited to the English and Korean languages, and gray literature was excluded. After the corresponding reference lists and abstracts were searched, the full articles were collected and reviewed, and manual searches of core journals were performed. The initial database search retrieved 442 studies, of which 326 remained after removing duplicates. After title and abstract review, 294 results were excluded, and the remaining 32 studies were assessed for eligibility. Seven studies were excluded because they were not conducted among nurses and nursing students or were not written in English or Korean. Of the 25 studies that satisfied the selection criteria, 14 were excluded because they did not measure clinical reasoning. Therefore, a total of 11 studies were included in the systematic review. In addition, the literature on self-regulation theory was analyzed to derive the attributes of clinical reasoning competency.

To supplement the concepts identified from the literature review and to develop items suitable for the real-world circumstances faced by nurses, online in-depth interviews were conducted with 10 nurses. Because the CRCS is intended to estimate nurses’ clinical reasoning competency, the in-depth interviews were conducted only with nurses; nursing experts’ opinions were instead reflected in the content validity review. The inclusion criteria for the interviews were (1) nurses who understood the study purpose and (2) nurses who worked in hospitals with 500 beds or more. In-depth interviews were conducted with nurses across the range of clinical career stages suggested by Benner [3], from less than 1 year of clinical experience to proficiency. The nurses worked in medical and surgical wards, the emergency room, and the intensive care unit. The in-depth interviews were guided by questions such as “How does your nursing care resemble or differ from that of nurses with high clinical reasoning competency?” and “Do you check the problem-solving process for a given patient and compensate for any deficiencies?” The interviews were conducted in a non-face-to-face format using Webex to protect the participants from the risk of coronavirus disease 2019 transmission. The interview data were analyzed using thematic analysis, and the literature review and in-depth interview results were treated as complementary. After each interview, the researchers recorded the critical points in writing. When an interview yielded no new information, data collection was judged to have reached saturation, and no further interviews were performed.

Phase 2: development of the item pool

Based on the results of the literature review and in-depth interviews, the attributes of clinical reasoning and preliminary items were derived. The CRCS was developed in the Korean language, and an expert on Korean language and literature revised it for readability and ambiguity. A 5-point Likert scale (1 = strongly disagree; 5 = strongly agree) was used as the response measure for the preliminary items.

Phase 3: expert review with the Delphi technique

To reach consensus on the derived items, the Delphi technique was used to collect expert opinions. A panel of 12 experts was formed, consisting of nursing professors, clinical professionals, and pedagogical experts; the pedagogical experts were involved in the psychometric evaluation of the instrument. Snowball sampling was used to recruit the Delphi experts. A two-round Delphi questionnaire was administered to examine the 72 preliminary items. The content validity ratio (CVR) was calculated from the appropriateness ratings of the expert panelists. The CVR is a measure of panelist agreement based on the proportion of panelists who rate an item as essential [22]. The CVR is calculated as follows:

$$CVR = [{n_e} - (N/2)]/(N/2)$$

In this equation, n_e is the number of panelists who rate an item as essential, and N is the total number of panelists. Since there were 12 Delphi survey panelists, the minimum acceptable value was 0.56 [22], and items with a CVR lower than 0.56 were deleted.
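For illustration, the CVR for a 12-member panel can be computed directly from the formula above. This is a minimal sketch; the panel counts in the example are hypothetical, not taken from the study:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR: [n_e - (N/2)] / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical items rated "essential" by 12, 10, and 9 of 12 panelists:
for n_e in (12, 10, 9):
    cvr = content_validity_ratio(n_e, 12)
    print(n_e, round(cvr, 2), "keep" if cvr >= 0.56 else "delete")
```

With 12 panelists, an item endorsed by 9 experts yields CVR = 0.5, which falls below the 0.56 threshold and would be deleted.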

Phase 4: pilot study

After the completion of the preliminary CRCS, a pilot questionnaire was administered to 21 nurses to determine whether the instructions or any items were difficult to understand. The scale uses a 5-point Likert response format (1 = strongly disagree; 5 = strongly agree), with higher scores indicating higher clinical reasoning competency.

Phase 5: construct validity

Construct validity was examined using exploratory factor analysis (EFA). The participants were nurses working in hospitals with more than 500 beds. For factor analysis or an analysis of the correlations between items, the recommended sample size is 5–10 times the number of items [23]. Considering potential dropout, 500 nurses were recruited; 17 did not complete the survey, so 483 nurses were included in the final EFA.

Phase 6: criterion validity

Criterion validity was examined in terms of concurrent validity by comparing the CRCS with the NCRC [11], a “gold-standard” measurement of clinical reasoning competency [17]. The Cronbach’s α of the NCRC was 0.94, and the Korean version of the NCRC had a Cronbach’s α of 0.93 [24].

Phase 7: reliability

Internal consistency was examined using Cronbach’s α. A test-retest assessment was conducted to verify the scale’s stability. The retest was performed at a 2-week interval with participants drawn from the construct validity survey; those who agreed to participate completed the retest (n = 51).

Data collection

Data collection was conducted through an online Google survey of nurses working in hospitals with more than 500 beds. The researchers explained the study purpose, the inclusion and exclusion criteria, and the expected effects through notices distributed at S University Hospital and in online nurse communities. Convenience sampling was used to recruit the nurses.

Statistical analysis

The collected data were analyzed using SPSS for Windows version 25.0 (IBM Corp., Armonk, NY, USA). Descriptive statistics were used to analyze the participants’ demographic characteristics. We performed validity and reliability tests, and item analysis was conducted. Construct validity was determined using EFA. Principal component analysis and the varimax rotation method were used. Criterion validity was examined using Pearson correlation coefficients. Cronbach’s α was used to test the reliability. Test-retest reliability was assessed using Pearson correlation coefficients. Two researchers performed the data analysis.

Results

Identification of clinical reasoning competency attributes

According to the literature review, the components of self-regulated learning are metacognition, behavior, and environmental regulation. Metacognition, an essential element of clinical reasoning competency, consists of the properties of planning, monitoring, regulation, and evaluation [25, 26].

Behavioral regulation means that learners choose meaningful behaviors by controlling their actions to reach goals; they achieve these goals through time management and self-examination activities, such as seeking help from others [27, 28]. Accordingly, three detailed factors of behavioral regulation were derived: self-instruction to check the cognitive processes for problem-solving, self-reinforcement to enhance behavior, and help-seeking to obtain advice from colleagues.

Environmental regulation refers to learners’ ability to make sufficient use of materials such as books and the internet for effective learning during problem-solving. Kuiper [18] reported that relationships with other healthcare professionals, including the relationship with patients, are emphasized to facilitate multidisciplinary collaboration. Therefore, in the CRCS, empathy with the patient and multidisciplinary collaboration were derived as factors of environmental regulation for measuring clinical reasoning competency.

These themes were derived by analyzing and organizing the results of in-depth interviews conducted to develop items to measure clinical reasoning competency (Table 1).

Table 1 Summary of the in-depth interviews

Through the literature review and in-depth interviews, three domains and nine clinical reasoning attributes were derived. The three domains were metacognition, behavioral regulation, and environmental regulation. The nine attributes were as follows: planning, monitoring, control, evaluation, self-reinforcement, self-instruction, help-seeking, empathy with the patient, and an interdisciplinary approach.

Development of the preliminary items

In total, 72 preliminary items were developed based on the literature review and in-depth interviews. After two rounds of verification with the Delphi technique, 31 items were deleted because they had a CVR lower than 0.56 or were duplicates. The terminology was revised according to the experts’ opinions. For example, the item “analyze the reason for an error that occurred during nursing care” was changed to “analyze the cause of an error that occurred during nursing care.” The experts suggested phrasing this in terms of “cause” rather than “reason” in the questionnaire to estimate clinical reasoning competency, focusing on the process of metacognition. In total, 41 preliminary items were derived for the CRCS.

Pilot study

The preliminary CRCS was administered as a questionnaire to 21 nurses. The participants suggested including the subject pronoun “I” to clarify the meaning of item #40. This item was modified. Finally, 41 preliminary CRCS items were confirmed.

General characteristics of the participants

The validity and reliability of the CRCS were examined with 483 nurses. A total of 91.5% of the participants were female, and the average age was 31.43 years. The most common highest education level was a bachelor’s degree, the most common work department was the intensive care unit, and the most common range of clinical experience was 5 to 10 years (Table 2).

Table 2 Characteristics of the participants (N = 483)

Validity

Item analysis was performed using the mean, standard deviation, and item-total correlation coefficient. The appropriate range for the item-total correlation is 0.30–0.70 [29]. The item-total correlations of the CRCS questions ranged from 0.440 to 0.624.
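The item analysis above can be sketched as follows: the corrected item-total correlation correlates each item with the sum of the remaining items. The data here are simulated Likert-style responses, not the study’s data:

```python
import numpy as np

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score excluding that item."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

# Simulated 5-point responses: 483 nurses x 22 items sharing one latent trait.
rng = np.random.default_rng(1)
latent = rng.normal(size=(483, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=1.2, size=(483, 22))), 1, 5)
r = corrected_item_total(items)
print(r.round(2))  # values roughly within the 0.30-0.70 acceptable range
```

Excluding the item itself from the total avoids inflating the correlation, which is why the corrected (item-rest) form is the usual choice for item analysis.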

To determine the number of factors among the 41 items, principal component analysis with varimax rotation was performed. The number of factors was determined based on the eigenvalues, the scree plot, and the parallel analysis results. In the parallel analysis, three eigenvalues from the data were greater than those from randomly generated data, so three factors were extracted with this method. Accordingly, EFA was performed with three factors, which explained 52.62% of the total variance. Items were deleted when their communality was less than 0.40 or when they loaded 0.40 or higher on two or more factors (cross-loading); ultimately, 3 factors and 22 items were extracted (Table 3). The items included in the final three factors were reviewed, and the factors were named “plan setting,” “intervention strategy regulation,” and “self-instruction.”
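A parallel analysis of the kind described above can be sketched as follows: observed eigenvalues of the item correlation matrix are retained only while they exceed the mean eigenvalues obtained from random data of the same shape. The example uses simulated single-factor data, not the study’s data:

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 50, seed: int = 0) -> int:
    """Number of factors whose eigenvalues exceed those of random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    eig = lambda x: np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    observed = eig(data)
    random_mean = np.mean(
        [eig(rng.standard_normal((n, p))) for _ in range(n_sims)], axis=0)
    keep = 0
    while keep < p and observed[keep] > random_mean[keep]:
        keep += 1
    return keep

# Ten noisy indicators of a single latent factor, 483 simulated respondents.
rng = np.random.default_rng(7)
factor = rng.normal(size=(483, 1))
data = factor + 0.7 * rng.normal(size=(483, 10))
print(parallel_analysis(data))  # 1
```

Because random-data eigenvalues hover around 1, parallel analysis is less prone to over-extraction than the plain eigenvalue-greater-than-1 rule.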

Table 3 Exploratory factor analysis of preliminary items (N = 483)

Criterion validity was verified with the NCRC, which was developed by Liou et al. as a clinical reasoning scale [11]. The correlation between the total NCRC and CRCS scores was 0.78, and the correlations for the three CRCS factors were 0.76 for plan setting, 0.67 for intervention strategy regulation, and 0.50 for self-instruction, all of which were significant correlations.

Reliability

The Cronbach’s α of the CRCS was 0.92, with the coefficients of the subdomains ranging from 0.73 to 0.89 (Table 4). The test-retest correlation coefficient was r = .76 (p < .001), indicating high reliability. After the completion of the above steps, the final CRCS consisted of 22 items across 3 factors: plan setting, intervention strategy regulation, and self-instruction.
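For reference, the two reliability statistics reported above can be computed as sketched below. The data are simulated (one shared trait plus noise), not the study’s data, so the printed values only illustrate the calculations:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(size=(483, 1))
consistent = trait + 0.5 * rng.normal(size=(483, 22))  # items share one trait
print(round(cronbach_alpha(consistent), 2))            # high internal consistency

# Test-retest reliability: Pearson r between two administrations of the scale.
retest_total = consistent.sum(axis=1) + rng.normal(scale=3, size=483)
print(round(np.corrcoef(consistent.sum(axis=1), retest_total)[0, 1], 2))
```

Items that all track the same trait drive the total-score variance well above the sum of item variances, which is what pushes α toward 1.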

Table 4 The reliability of CRCS

Discussion

This study was conducted to develop a scale to measure nurses’ clinical reasoning competency. The validity of the scale was verified. The final CRCS was confirmed through reliability testing.

This scale was developed based on self-regulated learning theory, which suggests that nurses can cultivate clinical reasoning competency by controlling their actions until they reach their goals or by fostering a motivation for problem-solving. Self-regulated learning consists of metacognition, behavioral regulation, and environmental regulation [18]. These three attributes do not work independently; rather, their interaction allows problem-solving to be reliably achieved [30]. Previous tools for measuring the clinical reasoning competency of nurses or other health care practitioners mostly focused on the knowledge needed to solve problems or on cognitively oriented questions about making an accurate diagnosis. Thus, they do not capture the metacognitive aspects of clinical reasoning, such as reflecting upon and regulating the problem-solving process. In nurses’ clinical decision-making through reasoning, metacognition refers to the ability to set cognitive strategies while knowing which outcomes they produce; it is therefore a key element of problem-solving [31, 32]. Metacognition, which enables nurses to make immediate judgments about their problem-solving through reflection and evaluation, is an essential predictor of clinical reasoning competency. The CRCS is therefore valuable because it includes nurses’ metacognitive competency as an attribute, based on self-regulated learning theory. In addition, clinical reasoning competency should be measured in terms of decision-making or observable behavior. The CRCS, developed as a structured scale by applying self-regulated learning theory, is expected to inform conceptual models and intervention strategies and to enhance researchers’ understanding of clinical reasoning competency.

In this study, 72 preliminary questions were derived from the literature review and in-depth interviews: 44 metacognition questions, 22 behavioral regulation questions, and 6 environmental regulation questions. Item analysis verified the means, standard deviations, and correlations between each item and the total CRCS score [29].

Following the parallel analysis, EFA was performed with varimax rotation, resulting in a three-factor, 22-item structure that explained 52.62% of the total variance; the factors were plan setting, intervention strategy regulation, and self-instruction. The first factor, plan setting, consists of 8 items, most of which concern plans to provide nursing care for the patient. Lee et al. [12] analyzed the techniques and processes of clinical reasoning among nurse practitioners: nurses quickly identified patients’ health problems in complex clinical situations and established solutions and plans as cognitive strategies. This is a fundamental step in clinical reasoning, and setting a plan as part of the overall cognitive strategy for problem-solving is imperative for nurses’ clinical reasoning competency. The CRCS can estimate this attribute.

The second factor, intervention strategy regulation, had the highest explained variance. This factor was extracted by integrating the evaluation and control attributes from the preliminary CRCS; among its 11 items, six are related to control. During problem-solving in a difficult situation, seeking advice from colleagues and modifying one’s cognitive strategies are emphasized as essential [33]. Therefore, the items included in intervention strategy regulation are considered meaningful for measuring nurses’ clinical reasoning competency.

The third factor, self-instruction, consists of 3 items measuring nurses’ self-development toward clinical reasoning competency. In self-regulated learning, Paul and Pintrich [34] reported that learners need to observe themselves to achieve their own goals. Using self-regulated learning strategies, nurses discovered problems, corrected them, and developed behavioral control [8]. The CRCS makes it possible to measure nurses’ strategies for problem-solving.

To identify the factors of clinical reasoning competency, attributes were derived through a literature review, and the CRCS was reduced to three factors through the Delphi technique and the scale verification process. The help-seeking attribute of behavioral regulation did not survive as a separate factor; it was integrated with the multidisciplinary collaboration component of environmental regulation, presumably because seeking advice from and collaborating with colleagues overlaps with seeking help during problem-solving. The monitoring attribute was incorporated into plan setting, and evaluation was integrated into intervention strategy regulation. Participants may have reviewed the appropriateness of their cognitive strategies as part of establishing methods for problem-solving. Furthermore, nurses reflected on errors and modified their cognitive strategies when solving patients’ health problems, which could have contributed to the merging of the evaluation items into intervention strategy regulation.

To verify criterion validity, we used the NCRC developed by Liou et al. [11] based on a clinical reasoning model [4]. The correlation coefficients between the two scales ranged from r = 0.50 to 0.78, indicating that the criterion validity was satisfactory.

The reliability test of the CRCS demonstrated a Cronbach’s α of 0.92, and the Cronbach’s α of each factor was > 0.70, indicating satisfactory reliability according to the criterion proposed by Nunnally and Bernstein [35].

The CRCS was developed to measure the clinical reasoning competency of nurses. A higher score indicates higher clinical reasoning competency. The CRCS is a standardized tool that was verified through the Delphi method and showed satisfactory construct validity, criterion validity, and reliability. Therefore, the CRCS can be used in various studies to develop and cultivate nurses’ clinical reasoning competency.

Limitations

The scale developed in this study provides primary data that can be used to improve clinical reasoning competency. Based on the research results, the limitations of this study and suggestions for future work are as follows.

First, because no cutoff value for the CRCS was established, it may be difficult to select or compare nurses whose clinical reasoning competency exceeds a certain level. We suggest that follow-up studies supplement respondents’ subjective judgments and determine the sensitivity and specificity of the scale to increase its usefulness.

Second, the difficulty of the items was not analyzed. Therefore, further research is needed to supplement the tool through an elaboration process.

Third, since this study used convenience sampling of nurses working in general hospitals with more than 500 beds, there may have been limitations related to bias or a lack of generalizability of the data. To use the CRCS among nurses working in other environments, the validity and reliability of the scale will need to be further verified.

Conclusion

This methodological study was conducted to develop a scale to measure nurses’ clinical reasoning competency. Preliminary questions were derived by analyzing the attributes of nurses’ clinical reasoning competency through a literature review and in-depth interviews. The content validity of the items was verified using the Delphi technique, and the preliminary CRCS was completed through a pilot study. The final CRCS was completed by verifying the construct validity, criterion validity, internal consistency, and test-retest reliability of the scale based on data from 483 nurses working in hospitals with 500 beds or more. The CRCS is a self-reported measurement tool consisting of 11 items for intervention strategy regulation, 8 items for plan setting, and 3 items for self-instruction. All items are scored on a 5-point Likert scale, with higher scores indicating higher clinical reasoning competency. The average CRCS score was 83.62 in this study; the average score for each factor was 40.69 for intervention strategy regulation, 31.67 for plan setting, and 11.26 for self-instruction. The CRCS was shown to be valid and reliable for measuring nurses’ clinical reasoning competency. Therefore, using the CRCS is expected to provide raw scientific and empirical data for various intervention programs to develop and improve nurses’ clinical reasoning competency. In addition, the attributes of clinical reasoning competency identified in developing this scale can be applied to educational programs that foster nurses’ clinical reasoning and problem-solving. Ultimately, this research is expected to contribute to patient safety.

Data Availability

The authors can confirm that all relevant data are included in the article.

Abbreviations

CINAHL: Cumulative Index for Nursing Allied Health Literature

CRCS: Clinical Reasoning Competency Scale

CVR: content validity ratio

EFA: exploratory factor analysis

GFI: goodness-of-fit index

HSRT: Health Science Reasoning Test

NCRC: Nurse Clinical Reasoning Competence

References

  1. Juma S, Goldszmidt M. What physicians reason about during admission case review. Adv Health Sci Educ Theory Pract. 2017;22:691–711. https://doi.org/10.1007/s10459-016-9701-x

  2. Soh M, Konopasky A, Durning SJ, Ramani D, McBee E, Ratcliffe T, et al. Sequence matters: patterns in task-based clinical reasoning. Diagnosis (Berl). 2020;7:281–9. https://doi.org/10.1515/dx-2019-0095

  3. Benner P. From novice to expert. Am J Nurs. 1982;82:402–7.

  4. Levett-Jones T, Hoffman K, Dempsey J, Jeong SY, Noble D, Norton CA, et al. The ‘five rights’ of clinical reasoning: an educational model to enhance nursing students’ ability to identify and manage clinically ‘at risk’ patients. Nurse Educ Today. 2010;30:515–20. https://doi.org/10.1016/j.nedt.2009.10.020

  5. American Nurses Association. Nursing: scope and standards of practice. 3rd ed. Silver Spring: American Nurses Association; 2015.

  6. Victor-Chmil J. Critical thinking versus clinical reasoning versus clinical judgment: differential diagnosis. Nurse Educ. 2013;38:34–6. https://doi.org/10.1097/NNE.0b013e318276dfbe

  7. Schraw G, Dennison RS. Assessing metacognitive awareness. Contemp Educ Psychol. 1994;19:460–75. https://doi.org/10.1006/ceps.1994.1033

  8. Kuiper RA, Pesut DJ. Promoting cognitive and metacognitive reflective reasoning skills in nursing practice: self-regulated learning theory. J Adv Nurs. 2004;45:381–91. https://doi.org/10.1046/j.1365-2648.2003.02921.x

  9. Simmons B. Clinical reasoning: concept analysis. J Adv Nurs. 2010;66:1151–8. https://doi.org/10.1111/j.1365-2648.2010.05262.x

  10. Holder AG. Clinical reasoning: a state of the science report. Int J Nurs Educ Scholarsh. 2018;15:20160024. https://doi.org/10.1515/ijnes-2016-0024

  11. Liou SR, Liu HC, Tsai HM, Tsai YH, Lin YC, Chang CH, et al. The development and psychometric testing of a theory-based instrument to evaluate nurses’ perception of clinical reasoning competence. J Adv Nurs. 2016;72:707–17. https://doi.org/10.1111/jan.12831

  12. Lee J, Lee YJ, Bae J, Seo M. Registered nurses’ clinical reasoning skills and reasoning process: a think-aloud study. Nurse Educ Today. 2016;46:75–80. https://doi.org/10.1016/j.nedt.2016.08.017

  13. Hur HK, Song HY. Effects of simulation-based clinical reasoning education and evaluation of perceived education practices and simulation design characteristics by students nurses. J Korea Contents Assoc. 2015;15:206–18. https://doi.org/10.5392/JKCA.2015.15.03.206

  14. Hur HK, Roh YS. Effects of a simulation based clinical reasoning practice program on clinical competence in nursing students. Korean J Adult Nurs. 2013;25:574–84. https://doi.org/10.7475/kjan.2013.25.5.574

  15. Lee J, Lee Y, Lee S, Bae J. Effects of high-fidelity patient simulation led clinical reasoning course: focused on nursing core competencies, problem solving, and academic self-efficacy. Jpn J Nurs Sci. 2016;13:20–8. https://doi.org/10.1111/jjns.12080

  16. Kang M, Song YH, Park SH. Relationships among metacognition, flow, interactions and problem solving ability in web-based problem based learning. J Res Curric Instr. 2008;12:293–315. https://doi.org/10.24231/rici.2008.12.2.293

  17. DeVellis RF. Scale development: theory and applications. 4th ed. Los Angeles: SAGE Publications, Inc.; 2016.

  18. Kuiper R. The effect of prompted self-regulated learning strategies in a clinical nursing preceptorship [master’s thesis]. Columbia (SC): University of South Carolina; 1999. 179 p.

  19. Jacobson SF. Evaluating instruments for use in clinical nursing research. In: Frank-Stromborg M, Olsen SJ, editors. Instruments for clinical health-care research. Boston: Jones & Bartlett; 2004. pp. 3–19.

  20. Tian J, Atkinson NL, Portnoy B, Lowitt NR. The development of a theory-based instrument to evaluate the effectiveness of continuing medical education. Acad Med. 2010;85:1518–25. https://doi.org/10.1097/ACM.0b013e3181eac3fb

  21. Finch TL, Mair FS, O’Donnell C, Murray E, May CR. From theory to ‘measurement’ in complex interventions: methodological lessons from the development of an e-health normalisation instrument. BMC Med Res Methodol. 2012;12:69. https://doi.org/10.1186/1471-2288-12-69

  22. Lawshe CH. A quantitative approach to content validity. Pers Psychol. 1975;28:563–75. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

  23. Tabachnick BG, Fidell LS. Principal components and factor analysis. In: Tabachnick BG, Fidell LS, editors. Using multivariate statistics. 4th ed. Boston: Allyn & Bacon; 2001. pp. 607–75.

  24. Joung J, Han JW. Validity and reliability of a Korean version of nurse clinical reasoning competence scale. J Korea Acad Ind Coop Soc. 2017;18:304–10. https://doi.org/10.5762/KAIS.2017.18.4.304

    Article  Google Scholar 

  25. Schraw G, Graham T. Helping gifted students develop metacognitive awareness. Roeper Rev. 1997;20:4–8. https://0-doi-org.brum.beds.ac.uk/10.1080/02783199709553842.

    Article  Google Scholar 

  26. Zimmerman BJ. Becoming a self-regulated learner: which are the key subprocesses? Contemp Educ Psychol. 1986;11:307–13. https://0-doi-org.brum.beds.ac.uk/10.1016/0361-476X(86)90027-5.

    Article  Google Scholar 

  27. Loomis KD. Learning styles and asynchronous learning: comparing the LASSI model to class performance. J Asynchronous Learn Netw. 2000;4:23–32.

    Google Scholar 

  28. Pintrich PR. A conceptual framework for assessing motivation and self-regulated learning in college students. Educ Psychol Rev. 2004;16:385–407. https://0-doi-org.brum.beds.ac.uk/10.1007/s10648-004-0006-x.

    Article  Google Scholar 

  29. Ferketich S. Focus on psychometrics. Aspects of item analysis. Res Nurs Health. 1991;14:165–8. https://0-doi-org.brum.beds.ac.uk/10.1002/nur.4770140211.

    Article  CAS  PubMed  Google Scholar 

  30. Zimmerman BJ. Becoming a self-regulated learner: an overview. Theory Pract. 2002;41:64–70. https://0-doi-org.brum.beds.ac.uk/10.1207/s15430421tip4102_2.

    Article  Google Scholar 

  31. Choi EJ. Relationships between metacognition, problem solving process, and debriefing experience in simulation as problem-based learning (S-PBL). J Korea Contents Assoc. 2016;16:459–69. https://0-doi-org.brum.beds.ac.uk/10.5392/JKCA.2016.16.01.459.

    Article  Google Scholar 

  32. Suh YJ, Bae JY, Lee JH. Factors related to the undergraduate nursing students’ metacognition. J Korea Converg Soc. 2019;10:523–32. https://0-doi-org.brum.beds.ac.uk/10.15207/JKCS.2019.10.11.523.

    Article  Google Scholar 

  33. Son LK, Schwartz BL. The relation between metacognitive monitoring and control. In: Perfect TJ, Schwartz BL, editors. Applied metacognition. Cambridge: Cambridge University Press; 2002. pp. 15–38. https://0-doi-org.brum.beds.ac.uk/10.1017/CBO9780511489976.003.

    Chapter  Google Scholar 

  34. Paul R, Pintrich PR. Chapter 14 - the role of goal orientation in self-regulated learning. In: Boekaerts M, Pintrich PR, Zeidner M, editors. Handbook of self-regulation. San Diego: Academic Press; 2000. pp. 451–502. https://0-doi-org.brum.beds.ac.uk/10.1016/B978-012109890-2/50043-3.

    Chapter  Google Scholar 

  35. Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.

    Google Scholar 

Download references

Acknowledgements

Not applicable.

Funding

This work was supported by Mo-Im Kim Nursing Research Institute, Yonsei University College of Nursing and Seoul Nurses Association.

Author information

Authors and Affiliations

Authors

Contributions

JB designed the study, collected the data, performed the analysis, and wrote the manuscript. JL designed the study, supervised the work, and wrote the manuscript. MC, YJ, and YL supervised the work and wrote the manuscript. CP supervised the study and performed the data analysis. All authors read and approved the final manuscript.

Corresponding author

Correspondence to JuHee Lee.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Human Research Ethics Committee of the Yonsei University Health System (approval number 2019-1741-001). All nurses gave written informed consent to participate in the study. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Bae, J., Lee, J., Choi, M. et al. Development of the clinical reasoning competency scale for nurses. BMC Nurs 22, 138 (2023). https://0-doi-org.brum.beds.ac.uk/10.1186/s12912-023-01244-6

  • DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s12912-023-01244-6

Keywords