
Applying real-time Delphi methods: development of a pain management survey in emergency nursing

Abstract

The modified Delphi technique is widely used to develop consensus on group opinion within health services research. However, digital platforms now offer researchers the capacity to undertake a real-time Delphi, which provides novel opportunities to enhance the process. The aim of this case study is to discuss and reflect on the use of a real-time Delphi method for researchers in emergency nursing and cognate areas of practice. A real-time Delphi method was used to develop a national survey examining knowledge, perceptions and factors influencing pain assessment and management practices among Australian emergency nurses. While designing and completing this real-time Delphi study, a number of areas emerged that demanded careful consideration and provide guidance for future researchers.


Background

The Delphi technique is an established and effective research method with multifaceted applications in health services research. It is uniquely designed to explore health issues and topics where minimal information or agreement currently exists, a relatively common situation within nursing practice. The technique also allows for the introduction and integration of viewpoints, opinions and insights from a wide array of expert stakeholders. With increasing access to the Internet and the proliferation of smart device technology, the shift from paper-based surveys to online software systems, such as those used in the real-time Delphi method, has significantly extended the potential research population and sample, and improved the efficiency of data collection and analysis. However, a recent systematic review highlighted a gap between the available methodological guidance and published primary research on conducting real-time Delphi studies [1, 2].

In this paper, we seek to examine the methodological gap in applying real-time Delphi methods by providing a specific case example from a real-time Delphi study conducted to develop a self-reporting survey tool exploring the pain management practices of Australian emergency nurses caring for critically ill adult patients [3]. Insights into the procedural challenges and enablers encountered in conducting a real-time Delphi study are provided. Importantly, key characteristics of the method are presented, followed by case-based exemplars to illustrate important methodological considerations. Reflections from the case are then presented, along with recommendations for future researchers considering the use of a real-time Delphi approach.

Overview of the Delphi Technique

The Delphi technique was developed in the late 1950s by the Research and Development (RAND) Corporation [4] as a method for enabling a group of individuals to collectively address a complex problem through a structured group communication process, without bringing participants together physically [5]. The Delphi technique has particular value in the healthcare sector, which is characterised by multi-disciplinary teams and hierarchical structures [6]. It has since become popular with nursing researchers exploring a wide range of topics, including role delineation [7,8,9], priorities for nursing research [10,11,12], standards of practice [13, 14] and instrument development [15, 16].

The four main characteristics of the classic Delphi method are anonymity, iteration, controlled feedback and statistical aggregation of group responses [17]. Data collection within the classic Delphi typically includes at least two [18] or three [19] rounds of questionnaires facilitated by a moderator. Round one represents what Ziglio [20] termed the ‘exploration phase’, in which the topic is fully explored using broad open-ended questions. Each following round then becomes part of an ‘evaluation phase’, where results of the previous round, interspersed with controlled feedback from a moderator, are used to frame another set of questions. Each round provides an opportunity for expert panel members to respond to and revise their answers in view of the previous responses from other panel members [21]. Since its introduction, over 20 variations of the classic Delphi method have evolved, with researchers modifying the approach to suit their needs. The most common versions include the modified, decision, policy, internet and, more recently, real-time Delphi, and studies have empanelled between 6 and 1,142 experts [22, 23] (Table 1).

Table 1 Common types of Delphi and their key differences

The ubiquitous and interactive capacity of the Internet and smart device technology offers benefits that are intimately linked with contemporary research innovations in healthcare [24]. Two clear limitations of the classic Delphi technique were prolonged study durations and high panel member attrition [25]. Aiming to overcome these issues, Gordon and Pease [26] developed an information technology-enabled, contemporaneous extension called the real-time Delphi, to improve the speed of data collection and the synthesis of opinions. Although initial thoughts on using technology to facilitate the Delphi process emerged as early as 1975 [27], the first purpose-built real-time Delphi software, Professional Delphi Scan, was developed in 1998 [28], and the first real-time Delphi surveys were performed and published in the early 2000s [29]. Since then, several real-time software-based tools have been developed, often by researchers for the purposes of their own study [30,31,32]; however, these have not been evaluated in detail in the literature. Conducting a real-time Delphi relies on specially designed software to administer the survey, and the functionality and capabilities of that software can directly affect the success of a study.

In a real-time Delphi process, participants are provided with access to an online questionnaire portal for a specified period of time. On accessing the portal, expert panel members see all of their own responses to items alongside the ongoing, hence real-time, anonymised responses of other panel members. The core innovation of real-time Delphi studies is this simultaneous calculation and feedback of group responses. Unlike the classic method, in a real-time Delphi participants do not judge at discrete intervals (i.e. rounds), but can change their opinions as often as they like within the set timeframe [33] (Fig. 1).

Fig. 1 Real-time Delphi processes
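To make the simultaneous calculation and feedback mechanism concrete, the following minimal Python sketch shows how a portal might recompute and return group statistics (median, range and interquartile range) the moment a panel member submits or revises a rating. The class name, method names and crude quartile calculation are illustrative assumptions only and do not represent any commercial real-time Delphi platform.

```python
from statistics import median
from typing import Dict, List


class RealTimeDelphiItem:
    """Minimal sketch of one survey item in a real-time Delphi portal.

    Assumes a 9-point rating scale and anonymised expert identifiers;
    all names here are illustrative, not part of any real product.
    """

    def __init__(self, text: str):
        self.text = text
        self.ratings: Dict[str, int] = {}  # expert_id -> current rating
        self.comments: List[str] = []      # anonymised qualitative arguments

    def submit(self, expert_id: str, rating: int, comment: str = "") -> dict:
        """Record (or revise) a rating and immediately return group feedback."""
        self.ratings[expert_id] = rating   # a revision simply overwrites the old value
        if comment:
            self.comments.append(comment)
        return self.group_feedback()       # no waiting for a discrete 'round'

    def group_feedback(self) -> dict:
        """The summary every panel member sees in real time."""
        values = sorted(self.ratings.values())
        q1 = values[len(values) // 4]        # crude quartile approximation,
        q3 = values[(3 * len(values)) // 4]  # adequate for illustration only
        return {
            "n": len(values),
            "median": median(values),
            "range": (values[0], values[-1]),
            "iqr": (q1, q3),
            "comments": list(self.comments),
        }


# An expert revises their rating after seeing how others have responded.
item = RealTimeDelphiItem("Pain is reassessed within 30 minutes of analgesia")
item.submit("expert_01", 8)
item.submit("expert_02", 6, "Depends on triage category")
print(item.submit("expert_01", 7))  # revised rating; feedback returned instantly
```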

Method

A real-time Delphi case exemplar

A real-time Delphi study was conducted to develop a context-specific instrument (i.e. a survey) to investigate emergency nurses’ practices in managing acute pain in critically ill adult patients. The following steps were followed in designing and conducting our real-time Delphi study: study design, pilot testing, recruiting experts, retention, data analysis and reporting. Findings from this study are reported elsewhere [34]. The real-time Delphi method was selected to: maximise participation from geographically dispersed expert panel members; minimise the amount of time demanded of experts; enable an equal flow of information to and from all members; present results in real time so that experts could reassess and adjust their opinions; and allow panel members a greater degree of expression [35, 36]. Prior to commencing the study, the research team conducted a comprehensive literature review to generate initial survey items, guided by the following questions:

  • What indicators would signify that acute pain in the critically ill adult patient has or has not been adequately detected?

  • What indicators would imply that acute pain in the critically ill adult patient has or has not been adequately managed?

  • What indicators would suggest that acute pain in the critically ill adult patient has or has not been communicated adequately?

A total of 74 items were initially generated from the literature and organised into six domains: clinical environment, clinical governance, practice, knowledge, beliefs and values, and perception. Next, commercially available real-time Delphi survey systems were evaluated for their suitability. This process was guided by reviewing the literature [33], trialling available platforms and examining fee structures. Following this review, Surveylet (Calibrum Inc., Utah) was selected [37]. Survey items were then uploaded into the Surveylet software system. Pilot testing was then conducted by the research team to evaluate software settings, automation, flow and ease of navigation. Average time to complete the survey was 38 min (SD 8 min).

An expert panel size of 12 to 15 was selected. Identification and selection of experts occurred in three stages: defining the relevant expertise, identifying individuals with the desired knowledge and experience, and retaining panel members. First, a pro forma was developed listing the skills, experience, qualifications, relevant professional memberships and academic outputs (e.g. peer-reviewed publications) that characterised a desired expert panel member. Second, the research team added potential experts to the list: academics were identified via a review of the pertinent literature, and emergency nursing clinicians were identified through contact with the College of Emergency Nursing Australasia. Third, initial contacts were approached and provided with a brief overview of the study; pertinent biographical information was then obtained. In addition, they were invited to nominate other experts to be approached for inclusion. Contacts were then independently ranked, with the top 15 experts invited to participate. Twelve accepted the invitation to participate: eight emergency nurses (six of whom were nurse consultants), two pain management nurse consultants and two emergency nursing academics from across Australia, with an average of 18 years of clinical experience. All experts held postgraduate qualifications and half had published in emergency nursing practice and/or pain management.

In Delphi studies, the research team sets a priori the level of consensus and stability sought for the items that experts will rate. In this study, consensus was achieved if ≥ 83 % of experts (10 out of 12 panel members) rated the item ≥ 7 on the 9-point Likert scale. Secondary measures of consensus among experts included stability of response, evaluated using the coefficient of quartile variation (< 5 %) and intraclass correlations (≥ 0.75) [38, 39]. Items were retained if both primary and secondary measures were met. Data were analysed using median, range and interquartile range. Descriptive statistics were then presented in tabular form and as scatterplots.
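As a concrete illustration of the primary consensus rule above, the short sketch below (a hypothetical helper, not part of Surveylet) checks whether at least 83 % of experts rated an item 7 or higher on the 9-point scale; the example ratings are invented.

```python
def reached_consensus(ratings, min_rating=7, min_proportion=0.83):
    """Primary consensus rule used in the case study: an item reaches
    consensus when >= 83% of experts rate it >= 7 on the 9-point scale.
    Function and parameter names are illustrative."""
    if not ratings:
        return False
    proportion_high = sum(r >= min_rating for r in ratings) / len(ratings)
    return proportion_high >= min_proportion


# Ten of the twelve hypothetical experts rate the item >= 7 (10/12 ≈ 0.83).
example_ratings = [9, 8, 8, 7, 7, 7, 8, 9, 7, 7, 6, 5]
print(reached_consensus(example_ratings))  # True
```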

The Delphi panel members were introduced to each survey domain as they navigated through the real-time Delphi software system, including descriptions of the domain and response format to rate each item. For ease of navigation, one item was presented per page, which included a real-time statistical summary and anonymised remarks from other experts (Fig. 2) [37].

Fig. 2 Example of review screen

Experts were asked to rate the importance of each question using a 9-point Likert scale (1 = extremely unimportant to 9 = extremely important), and to indicate whether the question could be modified to improve its relevance (Yes/No). If modifications were suggested, respondents were able to provide an example of how the question could be revised, which the expert panel could then vote on.
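A minimal sketch of how such a response could be represented is shown below; the data structures and field names are assumptions made for illustration and do not reflect how Surveylet actually stores responses.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ProposedRevision:
    """A suggested rewording of a question, itself open to voting by the panel."""
    text: str
    votes: Dict[str, bool] = field(default_factory=dict)  # expert_id -> endorses?


@dataclass
class ItemResponse:
    """One expert's response to one survey item (field names are illustrative)."""
    importance: int       # 1 = extremely unimportant ... 9 = extremely important
    modify: bool = False  # Yes/No: could the question be modified to improve relevance?
    revision: str = ""    # optional example of how the question could be revised


# An expert rates an item, flags it for modification and offers a rewording,
# which another (anonymised) panel member then votes on.
resp = ItemResponse(importance=8, modify=True,
                    revision="Ask specifically about reassessment after analgesia")
proposal = ProposedRevision(text=resp.revision)
proposal.votes["expert_03"] = True
print(proposal)
```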

Participation was asynchronous, with experts able to independently revisit the real-time Delphi survey portal and modify their responses at any point between 1 February and 14 March 2019 (a total of 35 days). On accessing the survey portal, panel members could engage in the consensus process from the outset by viewing how other panel members had responded. Panel members could view not only their own quantitative responses but also the median, range and interquartile range of all quantitative responses given. In the same way, panel members could view all qualitative arguments submitted by the panel, including their own. Panel members could then review or change any or all of their responses, or add new arguments, up until the survey closed. Prior to launching the Delphi study, the research team piloted access to the survey portal and the data collection and analysis methods.

Results

All panel members participated in the survey, providing on average four responses per survey item. Of the 74 items initially proposed, 58 (78.4 %) reached consensus within the first week of the study. Following feedback from the expert panel, 12 of the initial items (16.2 %) were modified to improve clarity, and a further 17 items were added by the panel to improve survey depth. At the conclusion of the real-time Delphi, the final survey contained 91 items.

While completing the real-time Delphi, several areas were identified as needing consideration when using this technique: software selection, rating scale, piloting, recruiting experts, consensus and stability, retention and reporting. Key issues are discussed in the following section.

Discussion

Software and survey design

This case study has highlighted that conducting a real-time Delphi requires specialised software. A recent review [33] independently evaluated the characteristics of four commercially available real-time Delphi software solutions (Risk Assessment and Horizon Scanning, eDelfoi, Global Futures Intelligence System and Surveylet) for their range of features and available question formats; data analytics; user friendliness; and intuitive system operation (Table 2). Surveylet (Calibrum Inc., Utah) [37] was rated the highest for its flexibility, breadth of inbuilt data management options, anonymity of participants and security. While the Surveylet system is well suited to conducting a real-time Delphi, it is sophisticated and requires additional time and guidance to configure correctly. Training is provided by way of video tutorials, and support options are available to assist in survey setup at an additional cost. We recommend that prior to conducting a real-time Delphi, researchers comprehensively review, and where possible trial, available software solutions.

Table 2 Comparison of Real-time Delphi software system limitations

Advantages of conducting an online Delphi study include: reduced data entry errors due to automated entry; fewer instances of panelists missing questions, resulting in incomplete data; shorter data collection times; and automated aggregation of results and feedback to panelists [40]. The principal difference between a conventional online Delphi and real-time Delphi software systems is the immediate calculation and provision of group responses, which can assist in generating time-sensitive guidance. While there are advantages (Table 3), there are also challenges, principally associated with software complexity [35] and cost [41].

Table 3 Strengths and challenges of the real-time Delphi

Internet accessibility, system navigation difficulties and the inconvenience of entering data into a computer-based data screen are recognised as challenges [44]. While the internet is a tool for extending the potential research population and sample, navigating an unfamiliar virtual landscape may frustrate panel members and therefore limit the number of completed surveys [45]. To minimise potential software complexity issues in our study, panel members were sent detailed written instructions on how to access and navigate the real-time Delphi software system, and could attend a one-to-one videoconference with a member of the research team to assist in using the platform [17, 46]. Cost-efficiency is often stated as a key benefit of using online survey tools to conduct an electronic survey [41]. However, in our review of commercially available real-time Delphi software systems, we found that costs can escalate, with system providers potentially charging per survey, per system administrator or participant enrolled, by survey duration, and/or for system support to aid survey customisation. While further evidence is needed to substantiate claims concerning the efficiency of the real-time Delphi method compared to multi-round Delphi designs, previous multi-round Delphi studies investigating topics relating to emergency nursing practice have taken 60 [43] to 273 [13] days to complete.

Rating scale

Currently, there is no agreement about what rating scale size should be used in Delphi studies, despite scale choice being a commonly cited reason for study failure [47,48,49]. Rating scales used in previous Delphi studies exploring aspects of emergency nursing practice have ranged from 4 to 11 points [1]. While 5- and 7-point scales are the most common forms of Likert scale used in surveys [50, 51], 9-point Likert scales are frequently used in Delphi studies, particularly during the consensus process [47, 49, 52]. A wide range of Likert rating scale sizes can be set within Surveylet. In addition, to support their rating, experts can detail the reasoning behind their selection, which other panel members can view. According to Best [53], accuracy can be improved if experts are provided with both quantitative and qualitative arguments. In a real-time Delphi, experts are able to react immediately to each other’s responses, increasing the amount of information they can interact with, which may aid in re-appraising their own point of view [26].

Piloting

Despite the administrative complexity of conducting any Delphi method, there is limited discussion of pilot testing in the literature. Pilot testing can be conducted to test and adjust the Delphi survey to improve comprehension [54]. When online software is used to administer the survey and collect multiple responses, as in a real-time Delphi, an error could jeopardise the overall study through its impact on cost, time, participant motivation and data integrity. Pilot testing is therefore vital to identify potential technical or system configuration errors and data collection irregularities (i.e. logic settings), and to strengthen participant orientation, prior to commencing the study [55]. Prior to initiating the real-time Delphi, we first verified the system configuration and all settings (e.g. timeframe, communication templates), panel member contact details, and that survey items were uploaded correctly. Second, members of the research team independently piloted the survey as mock participants and system administrators, to evaluate survey flow and ease of navigation. Average time to complete the survey was 38 min (SD 8 min).
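The pre-launch verification described above can be expressed as a simple checklist. The sketch below assumes the survey configuration can be exported to a plain dictionary; the keys, expected values and function name are hypothetical and are not Surveylet settings.

```python
def pilot_checks(config: dict, expected_items: int = 74) -> list:
    """Return a list of problems to resolve before opening the survey portal."""
    problems = []
    items = config.get("items", [])
    if len(items) != expected_items:
        problems.append(f"Expected {expected_items} items, found {len(items)}")
    if config.get("scale") != (1, 9):
        problems.append("Rating scale is not configured as 1 to 9")
    if not config.get("close_date"):
        problems.append("Survey close date is not set")
    missing_contacts = [p for p in config.get("panel", []) if not p.get("email")]
    if missing_contacts:
        problems.append(f"{len(missing_contacts)} panel member(s) missing contact details")
    return problems


# Hypothetical exported configuration; an empty list means all checks passed.
example_config = {
    "items": [f"item_{i}" for i in range(74)],
    "scale": (1, 9),
    "close_date": "2019-03-14",
    "panel": [{"id": "expert_01", "email": "panel.member@example.org"}],
}
print(pilot_checks(example_config))  # []
```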

Recruiting experts

The formulation of an expert panel and its makeup is of critical importance for all Delphi studies, yet raises methodological concerns that can negatively impact on the quality of the results [36, 56, 57]. Despite criticism in the literature of Delphi as a methodological approach [2, 17, 35, 36, 40, 58], there remains little agreement as to what defines an expert [36]. Keeney et al. [59], in their review, identified several definitions of ‘expert’, ranging from someone who has knowledge about a specific topic and is recognised as a specialist in the field, to an informed individual. A recent systematic review of the Delphi method in emergency nursing [1] found similar emphasis in the criteria commonly used to identify experts: length of clinical experience, professional role (e.g. educator, clinical nurse consultant), professional college membership, peer-reviewed publications and postgraduate qualifications. The current literature suggests that defining who is an expert may rest not on the role they occupy, but on the attributes they possess: knowledge and experience [36, 59,60,61].

Recruiting experts in our study required: defining the relevant expertise, identifying individuals with the desired knowledge and experience, and retaining panel members. Melnyk et al. [62] suggest that a minimum threshold for participation as an expert on a Delphi panel should include those measurable characteristics that each participant group would acknowledge as defining expertise, appropriate to the context, scope and aims of the particular study. While selection of panel experts in Delphi studies typically involves non-probability sampling techniques, which potentially reduces representativeness [17, 57], the aim of our study was to recruit emergency nursing academics and clinicians with knowledge and clinical experience in the phenomenon being explored: pain management practices for critically ill adult patients [55]. To achieve this, the procedure detailed by Delbecq et al. [63] was followed.

Expert panel size

Presently there is no agreement in the literature concerning expert panel size [2]. A recent review of 22 Delphi studies within emergency nursing reported a wide range of panel sizes, from fewer than 12 up to 315. Duffield [64] suggests that when a Delphi panel is homogeneous, 10 to 15 people are adequate. In a similar Delphi study seeking to develop a self-completed survey to examine triage practice, 12 experts were recruited [16]. As noted earlier, the target panel size in our study was 12 to 15; however, as Hartman and Baldwin [65] highlight, the higher degree of automation of real-time Delphi software systems, which are typically web-based, allows a greater number of experts across a large geographic area to participate in a real-time study.

Retention

Keeping participants fully engaged once recruited is challenging [40, 57]. High attrition rates can negatively impact on the clarity and validity of results (i.e. item consensus and selection) [56]. Conducting a classic multi-round Delphi study can be a slower process with respect to receiving and analysing feedback, generating the next survey round and determining consensus, which potentially increases the risk of attrition [66]. A potential benefit of the real-time Delphi is its expediency [67]. The much shorter interval between panel members submitting their responses and gaining insight into others’ responses encourages deeper cognitive engagement with the issue in question, maximising the validity of results [65]. Presently there is no formal guidance within the literature as to what constitutes an appropriate timeframe for the real-time Delphi method. However, consideration should be given to the overall consensus process timeframe, to ensure panel members have sufficient time to explore opinions and to minimise the potential risk of acquiescence bias. To detect potential acquiescence bias, dispersion measures such as the range and coefficient of quartile variation were used.

As noted by Zipfinger [42], asynchronous participation can also aid in retaining panel members. Panel members are able to access the Delphi portal at any time of day within the set timeframe, making it more convenient to participate and review feedback. Further, panelists can contribute to whichever aspects of the survey they wish, particularly once they have gone through each question at least once [68].

To maintain panel member engagement, we employed a variety of methods, beginning with participant information sheets. Information sheets were designed based on recommendations from the literature [58, 69], to ensure straightforward messaging on the importance and appeal of the study, its aims, processes, timeframe and benefits, all in clearly marked subsections. To further encourage potential experts who may have had little experience in participating in a real-time Delphi study, we detailed how participants would be introduced to the study and the Delphi methodology, the availability of one-on-one training sessions in the use of the real-time Delphi software system, and access to technical support. Once the study commenced, the real-time Delphi software sent personalised reminder emails at weekly intervals to encourage participants to (re)assess items in a timely fashion, and provided a summary of responses received to date. These emails emphasised that their views mattered and that, for the results to be meaningful, it was important to complete the Delphi process. Sending reminder emails once the Delphi has commenced can potentially increase retention and response activity of experts [58, 70]. However, while a recent study examining the experiences of Delphi participants concluded that receiving reminders to participate was not viewed negatively, it did not explore reminder frequency [71]. At the completion of the real-time Delphi, panel members were sent a certificate thanking them for their commitment to the study [58] and providing evidence for their professional development records [72]. Within our study, the level of response activity suggests that the retention and engagement strategies were effective.
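In our study the weekly reminders were generated automatically by the real-time Delphi software. Purely as an illustration of the approach, the sketch below generates weekly reminder dates within the study window and a personalised message with a response summary; the function names, message wording and figures are assumptions, not the platform's templates.

```python
from datetime import date, timedelta


def reminder_dates(start: date, close: date, interval_days: int = 7):
    """Yield reminder dates at weekly intervals until the survey closes."""
    current = start + timedelta(days=interval_days)
    while current < close:
        yield current
        current += timedelta(days=interval_days)


def reminder_message(items_completed: int, items_total: int) -> str:
    """Personalised summary emphasising that the panel member's views matter."""
    return (f"You have responded to {items_completed} of {items_total} items. "
            f"Your views matter: please revisit the portal to review or revise "
            f"your responses before the survey closes.")


for d in reminder_dates(date(2019, 2, 1), date(2019, 3, 14)):
    print(d.isoformat(), "-", reminder_message(60, 74))
```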

Consensus and stability

Quantifying the degree of consensus among experts is an important element of Delphi data analysis and interpretation; however, reaching a pre-specified threshold of consensus (e.g. greater than 80 %) is not the general aim, and consensus is rarely 100 % [73, 74]. Consensus can be used either to determine whether agreement exists or as a stopping guideline, and is measured at the conclusion of a preset number of rounds [75]. Further, as previous studies have demonstrated [49, 76], results can be greatly affected by the level of consensus set and the rating scale used [48, 77]. Within the emergency nursing literature, consensus thresholds have ranged from 50 % [78] to 90 % [79]. Our study used the most common consensus level from previous Delphi studies, which identified survey items as essential when rated as such by at least 80 % of the experts [1].

Stability of consensus is also important, and is best evaluated using measures of dispersion [54, 67]. Assessing stability can occur between consecutive rounds, as in the classic Delphi, or at the conclusion of the consensus process. While the use of the mean, standard deviation and parametric statistics to describe ordinal data is not strictly incorrect when the data are not irregular [80, 81], the use of the median, range and interquartile range is favoured for Likert-type scales, as these measures are less sensitive to outliers [57]. In the classic Delphi, stability is judged between rounds; in the real-time Delphi, stability of response is evaluated at the end of the study. In our study, a coefficient of quartile variation (CQV) value of less than 5 % was set a priori [82] and configured in Surveylet as a measure of relative dispersion based on the interquartile range. It is also a measure of homogeneity (i.e. internal consistency) appropriate for small sample (i.e. panel) sizes of 15 or fewer [83], expressed as:

$$ CQV = \left(\frac{Q_3 - Q_1}{Q_3 + Q_1}\right) \times 100 $$

In addition, intraclass correlations were calculated to inferentially determine the stability (≥ 0.75) of responses [38, 39]. Descriptive statistics were then presented in tabular form and as scatterplots. Survey items that met the above consensus and stability criteria were incorporated into the final survey (Table 4).

Table 4 Example of consensus and convergence amongst experts
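To illustrate the secondary (stability) criterion, the sketch below computes the CQV for a hypothetical set of final ratings and flags whether the a priori < 5 % threshold is met. The ratings are invented for illustration, and the intraclass correlation step is not shown.

```python
import numpy as np


def coefficient_of_quartile_variation(ratings) -> float:
    """CQV = (Q3 - Q1) / (Q3 + Q1) * 100, the relative dispersion measure
    used as a secondary (stability) criterion in the case study."""
    q1, q3 = np.percentile(ratings, [25, 75])
    return float((q3 - q1) / (q3 + q1) * 100)


final_ratings = [8, 8, 7, 8, 8, 7, 8, 8, 8, 7, 8, 8]  # hypothetical final ratings
cqv = coefficient_of_quartile_variation(final_ratings)
print(f"CQV = {cqv:.1f}% -> {'stable' if cqv < 5 else 'unstable'}")
```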

Reporting

Regardless of which Delphi study design and approach is adopted, attention to rigour of reporting throughout the process is a vital aspect of research. The trustworthiness of the Delphi technique has been debated in the general and nursing literature. Keeney et al. [36] and Powell [84] suggest that the Delphi technique should not be judged by the psychometric criteria used for more positivist approaches; several sets of criteria have instead been proposed to evaluate the trustworthiness of qualitative studies [55, 85,86,87,88]. A common purpose of these criteria is to support trustworthiness by accurately reporting the process of study design and data analysis.

In our study, we elected to apply the criteria proposed by Lincoln and Guba [86], based on four concepts: credibility, transferability, dependability and confirmability. Our real-time Delphi was based on consensus amongst experienced individuals familiar with the phenomena being explored, across emergency nursing, pain management and academia (credibility and confirmability). Decisions on the development of survey questions were arrived at through a documented and auditable process, supported by the Surveylet software system (credibility and dependability). The anonymous and continuous process of real-time Delphi research fostered honesty and verification of panelist responses, as panelists could provide feedback and ‘member-checking’ without fear of reprisal from their colleagues (credibility) [36]. Prior to initiating the real-time Delphi process, we piloted the survey for its structure, flow, ease of navigation and robustness (transferability).

Conclusions

Many papers describe the use of the classic Delphi approach in health services research, yet few provide practical advice on the type of design and the process for undertaking such research using the real-time Delphi method. This article presented a case exemplar of a real-time Delphi study and the development of a survey to explore emergency nursing practice. The real-time Delphi method can be of great use for a wide range of time-sensitive health research issues where divergent opinion or little agreement exists. Our experiences have highlighted important strengths and challenges in its deployment, including several methodological issues that may provide guidance to other researchers.

Availability of data and materials

Not applicable.

Abbreviations

CQV: Coefficient of Quartile Variation
RAND: Research and Development Corporation
SD: Standard Deviation

References

1. Varndell W, et al. Use of the Delphi method to generate guidance in emergency nursing practice: a systematic review. Int Emerg Nurs. 2021;56:100867.
2. McPherson S, Reese C, Wendler MC. Methodology update: Delphi studies. Nurs Res. 2018;67(5):404–10.
3. Varndell W, Fry M, Elliott D. Pain assessment and interventions by nurses in the emergency department: a national survey. J Clin Nurs. 2020;29:2352–62. https://0-doi-org.brum.beds.ac.uk/10.1111/jocn.15247.
4. Dalkey N, Helmer O. An experimental application of the Delphi method to the use of experts. Manage Sci. 1963;9(3):458–67.
5. Gordon T, Helmer O. Report on a long-range forecasting study. 1964 [cited 20 May 2020]. Available from: https://www.rand.org/content/dam/rand/pubs/papers/2005/P2982.pdf.
6. Beech R. Go the extra mile — use the Delphi technique. J Nurs Manag. 1999;7(5):281–8.
7. Roberts-Davis M, Read S. Clinical role clarification: using the Delphi method to establish similarities and differences between nurse practitioners and clinical nurse specialists. J Clin Nurs. 2001;10(1):33–43.
8. Duffield C. The Delphi technique. Aust J Adv Nurs. 1988;6(2):41–5.
9. White K, Wilkes L. Describing the role of the breast nurse in Australia. Eur J Oncol Nurs. 1998;2(2):89–98.
10. Annells M, et al. A Delphi study of district nursing research priorities in Australia. Appl Nurs Res. 2005;18(1):36–43.
11. Bayley EW, et al. ENA’s Delphi study on national research priorities for emergency nurses in the United States. J Emerg Nurs. 2004;30(1):12–21.
12. Considine J, et al. Consensus-based clinical research priorities for emergency nursing in Australia. Australas Emerg Care. 2018;21(2):43–50.
13. ENA NP Validation Work Team, et al. Nurse Practitioner Delphi Study: competencies for practice in emergency care. J Emerg Nurs. 2010;36(5):439–49.
14. Okuwa M, et al. Measuring the pressure applied to the skin surrounding pressure ulcers while patients are nursed in the 30 degree position. J Tissue Viability. 2005;15(1):3–8.
15. Wilkes L, et al. Development of a violence tool in the emergency hospital setting. Nurse Res. 2010;17(4):70–82.
16. Fry M, Burr G. Using the Delphi technique to design a self-reporting triage survey tool. Accid Emerg Nurs. 2001;9(4):235–41.
17. Rowe G, Wright G. The Delphi technique: past, present, and future prospects — introduction to the special issue. Technol Forecast Soc Chang. 2011;78(9):1487–90.
18. Miranda FBG, Mazzo A, Alves Pereira-Junior G. Construction and validation of competency frameworks for the training of nurses in emergencies. Rev Lat Am Enfermagem. 2018;26.
19. Murphy JP, et al. Emergency department registered nurses’ disaster medicine competencies. An exploratory study utilizing a modified Delphi technique. Int Emerg Nurs. 2019;43:84–91.
20. Ziglio E. The Delphi method and its contribution to decision-making. In: Adler M, Ziglio E, editors. Gazing into the oracle: the Delphi method and its application to social policy and public health. Bristol, PA: Jessica Kingsley Publishers; 1996. p. 3–33.
21. Mullen PM. Delphi: myths and reality. J Health Organ Manag. 2003;17(1):37–52.
22. Strasser S, London L, Kortenbout E. Developing a competence framework and evaluation tool for primary care nursing in South Africa. Educ Health (Abingdon). 2005;18(2):133–44.
23. Barnette JJ. Delphi methodology: an empirical investigation. Educational Research Quarterly; 1978.
24. Cole ZD, Donohoe HM, Stellefson ML. Internet-based Delphi research: case based discussion. Environ Manage. 2013;51(3):511–23.
25. Bardecki MJ. Participants’ response to the Delphi method: an attitudinal perspective. Technol Forecast Soc Chang. 1984;25(3):281–92.
26. Gordon T, Pease A. RT Delphi: an efficient, “round-less” almost real time Delphi method. Technol Forecast Soc Chang. 2006;73(4):321–33.
27. Linstone HA, Turoff M, editors. The Delphi method: techniques and applications. Reading, MA: Addison-Wesley; 1975.
28. Kuusi O, Hiltunen E. Signification process of future sign. Turku: Finland Futures Research Centre, Turku School of Economics; 2007.
29. Glenn J, Gordon T, editors. Futures research methodology. Version 3.0 ed. The Millennium Project; 2009.
30. Gary JE, von der Gracht HA. The future of foresight professionals: results from a global Delphi study. Futures. 2015;71:132–45.
31. Keller J, von der Gracht HA. The influence of information and communication technology (ICT) on future foresight processes — results from a Delphi survey. Technol Forecast Soc Chang. 2014;85:81–92.
32. Markmann C, Darkow I-L, von der Gracht H. A Delphi-based risk analysis — identifying and assessing future challenges for supply chain security in a multi-stakeholder environment. Technol Forecast Soc Chang. 2013;80(9):1815–33.
33. Aengenheyster S, et al. Real-time Delphi in practice — a comparative analysis of existing software-based tools. Technol Forecast Soc Chang. 2017;118:15–27.
34. Varndell W, Fry M, Elliott D. Pain assessment and interventions by nurses in the emergency department: a national survey. J Clin Nurs. 2020;29(13–14):2352–62.
35. Avella J. Delphi panels: research design, procedures, advantages and challenges. International Journal of Doctoral Studies. 2016;11:305–21.
36. Keeney S, Hasson F, McKenna H. The Delphi technique in nursing and health research. Ames, IA: Wiley-Blackwell; 2011.
37. Calibrum. Surveylet. St George, UT; 2020.
38. Trevelyan E, Robinson N. Delphi methodology in health research: how to do it? Eur J Integr Med. 2015;7(4):423–8.
39. Koo T, Li M. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–63.
40. Khodyakov D, et al. Practical considerations in using online modified-Delphi approaches to engage patients and other stakeholders in clinical practice guideline development. Patient. 2020;13(1):11–21.
41. Wright KB. Researching Internet-based populations: advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. J Comput Mediat Commun. 2005;10(3).
42. Zipfinger S. Computer-aided Delphi: an experimental study of comparing round-based with real-time implementation of the method. Linz: Trauner Verlag; 2007.
43. de Lemos J, Tweeddale M, Chittock D. Measuring quality of sedation in adult mechanically ventilated critically ill patients: the Vancouver Interaction and Calmness Scale. J Clin Epidemiol. 2000;53:908–19.
44. Donohoe HM, Needham RD. Moving best practice forward: Delphi characteristics, advantages, potential problems, and solutions. Int J Tourism Res. 2009;11(5):415–37.
45. Hall DA, et al. Recruiting and retaining participants in e-Delphi surveys for core outcome set development: evaluating the COMiT’ID study. PLoS One. 2018;13(7):e0201378.
46. Donohoe H, Stellefson M, Tennant B. Advantages and limitations of the e-Delphi technique: implications for health education researchers. Am J Health Educ. 2012;43:38–46.
47. Diamond IR, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–9.
48. Lange T, et al. Comparison of different rating scales for the use in Delphi studies: different scales lead to different consensus and show different test-retest reliability. BMC Med Res Methodol. 2020;20(1):28.
49. De Meyer D, et al. Delphi procedure in core outcome set development: rating scale and consensus criteria determined outcome selection. J Clin Epidemiol. 2019;111:23–31.
50. Weijters B, Cabooter E, Schillewaert N. The effect of rating scale format on response styles: the number of response categories and response category labels. Int J Res Mark. 2010;27(3):236–47.
51. Revilla MA, Saris WE, Krosnick JA. Choosing the number of categories in agree–disagree scales. Sociol Methods Res. 2014;43(1):73–97. https://0-doi-org.brum.beds.ac.uk/10.1177/0049124113509605.
52. Williamson PR, et al. The COMET Handbook: version 1.0. Trials. 2017;18(Suppl 3):280.
53. Best RJ. An experiment in Delphi estimation in marketing decision making. J Mark Res. 1974;11(4):448–52.
54. Clibbens N, Walters S, Baird W. Delphi research: issues raised by a pilot study. Nurse Res. 2012;19:37–44.
55. Polit D, Beck C. Essentials of nursing research: appraising evidence for nursing practice. 9th ed. Philadelphia: Lippincott Williams & Wilkins; 2017.
56. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.
57. Hsu C-C, Sandford B. The Delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;12(10):1–8.
58. Hall DA, et al. Recruiting and retaining participants in e-Delphi surveys for core outcome set development: evaluating the COMiT’ID study. PLoS One. 2018;13(7):e0201378.
59. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195–200.
60. Cantrill J, Sibbald B, Buetow S. The Delphi and nominal group techniques in health services research. Int J Pharm Pract. 1996;4(2):67–74.
61. Kennedy HP. Enhancing Delphi research: methods and results. J Adv Nurs. 2004;45(5):504–11.
62. Melnyk SA, et al. Mapping the future of supply chain management: a Delphi study. Int J Prod Res. 2009;47(16):4629–53.
63. Delbecq AL, Van de Ven AH, Gustafson DH. Group techniques for program planning: a guide to nominal group and Delphi processes. Group & Organization Studies. 1976;1(2):256.
64. Duffield C. The Delphi technique. Aust J Adv Nurs. 1989;6(2):41–5.
65. Hartman FT, Baldwin A. Using technology to improve Delphi method. J Comput Civ Eng. 1995;9(4):244–9.
66. Schmalz U, Spinler S, Ringbeck J. Lessons learned from a two-round Delphi-based scenario study. MethodsX. 2021;8:101179.
67. Gnatzy T, et al. Validating an innovative real-time Delphi approach — a methodological comparison between real-time and conventional Delphi studies. Technol Forecast Soc Chang. 2011;78(9):1681–94.
68. Turoff M, Hiltz S. Computer based Delphi processes. In: Adler M, Ziglio E, editors. Gazing into the oracle: the Delphi method and its application to social policy and public health. London: Jessica Kingsley Publishers; 1995. p. 56–88.
69. Ennis L, Wykes T. Sense and readability: participant information sheets for research studies. Br J Psychiatry. 2016;208(2):189–94.
70. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S. Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009;(3):MR000008.
71. Turnbull AE, et al. A survey of Delphi panelists after core outcome set development revealed positive feedback and methods to facilitate panel member participation. J Clin Epidemiol. 2018;102:99–106.
72. Nursing and Midwifery Board of Australia. Guidelines for continuing professional development. Melbourne, Victoria: NMBA; 2016.
73. Linstone HA, Turoff M. Delphi: a brief look backward and forward. Technol Forecast Soc Chang. 2011;78(9):1712–9.
74. Warth J, von der Gracht HA, Darkow I-L. A dissent-based approach for multi-stakeholder scenario development — the future of electric drive vehicles. Technol Forecast Soc Chang. 2013;80(4):566–83.
75. Keeney S, Hasson F, McKenna H. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J Adv Nurs. 2006;53(2):205–12.
76. Naylor CD, et al. Placing patients in the queue for coronary revascularization: evidence for practice variations from an expert panel process. Am J Public Health. 1990;80(10):1246–52.
77. Linstone H, Turoff M. The Delphi method: techniques and applications. Reading, MA: Addison-Wesley; 1975.
78. Lee FH, et al. Clinical competencies of emergency nurses toward violence against women: a Delphi study. J Contin Educ Nurs. 2015;46(6):272–8.
79. Holanda FL, Marra CC, Cunha I. Assessment of professional competence of nurses in emergencies: created and validated instrument. Rev Bras Enferm. 2018;71(4):1865–74.
80. Argyrous G. Statistics for research with a guide to SPSS. 2nd ed. London: SAGE; 2005.
81. Winter JD, Dodou D. Five-point Likert items: t test versus Mann-Whitney-Wilcoxon. Pract Assess Res Eval. 2010;15:1–16.
82. Machin D, Campbell MJ, Walters S. Medical statistics: a textbook for the health sciences. 4th ed. West Sussex: Wiley; 2007.
83. Altunkaynak B, Gamgam H. Bootstrap confidence intervals for the coefficient of quartile variation. Commun Stat Simul Comput. 2019;48(7):2138–46.
84. Powell C. The Delphi technique: myths and realities. J Adv Nurs. 2003;41(4):376–82.
85. Emden C, Sandelowski M. The good, the bad and the relative, part two: goodness and the criterion problem in qualitative research. Int J Nurs Pract. 1999;5(1):2–7.
86. Lincoln Y, Guba E. Naturalistic inquiry. London: Sage; 1985.
87. Neuendorf K. The content analysis guidebook. 2nd ed. London: SAGE Publications; 2016.
88. Schreier M. Qualitative content analysis in practice. Thousand Oaks: SAGE Publications; 2012.


Acknowledgements

Not applicable.

Funding

This publication was not funded or commissioned.

Author information

Authors and Affiliations

Authors

Contributions

WV: Conceptualisation, Writing – original draft, Writing – review and editing. MF: Conceptualisation, Writing – review and editing. DE: Conceptualisation, Writing – review and editing. All authors read and approved the manuscript.

Corresponding author

Correspondence to Wayne Varndell.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the South Eastern Sydney Human Research and Ethics Committee (17/162). Before commencing the study, written informed consent was obtained from all participants.

Competing interests

All authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Varndell, W., Fry, M. & Elliott, D. Applying real-time Delphi methods: development of a pain management survey in emergency nursing. BMC Nurs 20, 149 (2021). https://0-doi-org.brum.beds.ac.uk/10.1186/s12912-021-00661-9
