
Blended Learning: Enhancing Knowledge Outcomes in Health Education – A Meta-Analysis

Introduction

Background

The landscape of education, particularly in the health professions, has been significantly reshaped by the advent of digital learning technologies. E-learning, leveraging the power of the internet, has emerged as a popular and potentially transformative approach in medical education [[1](#ref1),[2](#ref2)]. These innovative models break down traditional barriers of time and location, fostering enhanced collaboration, personalized learning experiences, and greater convenience for both educators and learners [[3](#ref3)–5]. However, e-learning is not without its challenges, including the substantial investment required for high-quality multimedia resources, ongoing platform maintenance costs, and the need for user training. Conversely, traditional face-to-face learning, while offering valuable in-person interaction, is constrained by the necessity for physical co-presence of students and instructors at specific times and locations [[6](#ref6)].

Blended learning, a pedagogical approach that strategically combines the strengths of both traditional face-to-face instruction and the flexibility of e-learning, has emerged as a compelling solution [[7](#ref7)]. This hybrid model, incorporating both synchronous and asynchronous e-learning components, presents a promising alternative for health education, offering a balanced approach that aims to maximize learning effectiveness while mitigating the limitations of either approach in isolation. The adoption of blended learning in academic settings has experienced rapid growth, becoming increasingly prevalent across diverse educational contexts [[8](#ref8)].

Research into the efficacy and implementation of blended learning has expanded significantly since the 1990s [[9](#ref9)–11]. Synthesizing the findings from this growing body of research is crucial for providing educators and learners with evidence-based insights into the effectiveness of blended learning strategies [[12](#ref12)]. Previous systematic reviews have suggested the potential of blended learning to enhance clinical training for medical students [[13](#ref13)] and improve undergraduate nursing education [[14](#ref14)]. Furthermore, numerous reviews have explored the broader potential of blended learning within medical education [[15](#ref15),16]. A prior meta-analysis [[12](#ref12)] indicated that blended learning outperformed non-blended learning approaches, although it also noted significant heterogeneity across studies.

However, many of these existing reviews have been limited in scope, focusing on specific areas within health education. Moreover, few have employed quantitative synthesis methods to rigorously evaluate the effectiveness of blended learning, particularly concerning knowledge outcomes. Therefore, this study aims to address this gap by quantitatively synthesizing research that evaluates the efficacy of blended learning in health education, specifically focusing on knowledge outcomes among students, postgraduate trainees, and practitioners compared to traditional learning methods.

Objective

The primary objective of this systematic review and meta-analysis is to rigorously evaluate the effectiveness of blended learning in health education on knowledge outcomes. This evaluation will be based on a comprehensive analysis of studies comparing blended learning with traditional learning, assessing knowledge gains through both subjective (e.g., learner self-reports) and objective evaluations (e.g., multiple-choice question knowledge tests). The study will encompass learners’ factual and conceptual understanding of course content across various health disciplines.

Methods

Comparison Categories and Definitions

This study compared blended learning to traditional learning in health education, both overall and stratified by the type of learning support integrated within the blended approach. The specific comparisons included: offline blended learning versus traditional learning, online blended learning versus traditional learning, digital blended learning versus traditional learning, computer-aided instruction blended learning versus traditional learning, and virtual patient blended learning versus traditional learning.

Offline learning was defined as educational activities utilizing personal computers or laptops to deliver standalone multimedia content without requiring a continuous internet or local area network connection [[17](#ref17)]. This category includes resources supplemented by videoconferences, emails, and audio-visual materials stored on portable media (CD-ROM, flash drives, external hard drives), provided that the core learning activities did not depend on real-time network connectivity [[18](#ref18)].

Online support encompassed all learning materials and activities delivered via the internet or a network, requiring ongoing connectivity for access and interaction.

Digital education was broadly defined as a range of teaching and learning strategies fundamentally reliant on electronic media and devices as tools for instruction, communication, and interaction [[19](#ref19)]. This encompasses diverse educational approaches, concepts, methodologies, and technologies that facilitate remote learning, potentially addressing shortages of health professionals in resource-limited settings by overcoming time constraints and geographical barriers to training.

Computer-assisted instruction (CAI) was defined as the use of interactive CD-ROMs, multimedia software, or audio-visual resources to enhance instruction. This includes multimedia presentations, synchronous virtual sessions delivered through web-based learning platforms, presentations incorporating audio-visual elements, and synchronous or asynchronous discussion forums designed to promote participation and engagement [[20](#ref20),21].

Virtual patients (VPs) were defined as interactive computer simulations of realistic clinical scenarios used for health professional training, education, or assessment. This broad definition encompasses a variety of systems employing diverse technologies and addressing a range of learning objectives [[22](#ref22)].

Traditional learning, within the context of this review, was defined as any non-blended learning approach. This included non-digital and non-online methods, as well as exclusively online, exclusively e-learning, or other single-support educational methods such as lectures, face-to-face sessions, reading assignments, and classroom-based group discussions.

Reporting Standards

This systematic review and meta-analysis were conducted and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [[23](#ref23)] and the Cochrane systematic review guidelines [[24](#ref24)].

Eligibility Criteria

The inclusion criteria for studies were established using the PICOS (population, intervention, comparison, outcome, and study design) framework.

Studies were included if they: (1) involved health professions learners (students, trainees, or practitioners); (2) implemented a blended learning intervention in the experimental group; (3) included a comparison of blended learning with traditional learning; (4) reported quantitative knowledge outcomes assessed through subjective or objective evaluations; and (5) were randomized controlled trials or non-randomized studies (given the prevalence of non-randomized designs in health education research). Only studies published in English were considered for inclusion.

Data Sources

To identify relevant studies, a comprehensive search of citations published in MEDLINE (PubMed) was conducted, spanning the period from January 1990 to July 2019. Key search terms encompassed delivery concepts (blended, hybrid, integrated, computer-aided, computer assisted, virtual patient, learning, training, education, instruction, teaching, course), participant characteristics (physician, medic*, nurs*, pharmac*, dent*, health*), and study design concepts (compar*, trial*, evaluat*, assess*, effect*, pretest*, pre-test, posttest*, post-test, preintervention, pre-intervention, postintervention, post-intervention). Asterisks (*) were used as truncation symbols to broaden the search. The complete search strategy is detailed in Multimedia Appendix 1.

Study Selection

Two independent reviewers (AV and ES) applied the predefined eligibility criteria. Initially, they screened the titles and abstracts of all identified articles; subsequently, they reviewed the full texts of articles deemed potentially eligible. Disagreements regarding study inclusion were resolved through discussion and consultation with a third member of the research team until consensus was reached.

Data Extraction

AV and ES independently extracted relevant data from the included studies using a standardized data collection form. Extracted information included participant characteristics, details of the intervention and comparator, outcome measures, and key results. Any discrepancies in data extraction were resolved through discussion and, if necessary, consultation with a third research team member.

Risk of Bias Assessment

During data extraction, AV and ES independently assessed the risk of bias for each included study using the Cochrane Collaboration’s Risk of Bias tool [[25](#ref25)]. The assessment criteria included: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other potential biases, including publication bias. Publication bias was further investigated using funnel plots. For each criterion, the risk of bias was categorized as low, high, or unclear, following the Cochrane Risk of Bias tool guidelines.

Data Synthesis

Quantitative data analysis was performed using SAS software (version 9.4; SAS Institute). Knowledge outcomes were analyzed using the standardized mean difference (SMD), calculated as the Hedges’ g effect size from the means and standard deviations reported in each study [[15](#ref15)]. When a study reported a mean but not a standard deviation, the missing standard deviation was imputed as the mean standard deviation from all other included studies; because scoring scales varied across studies, means and standard deviations were first rescaled so that this average could reasonably substitute for missing values. Meta-analyses were conducted using a random-effects model, with statistical significance set at P<.05.
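As an illustrative sketch (not the authors’ SAS code), the Hedges’ g effect size described above can be computed from each study’s group means and standard deviations. The values below are hypothetical and not taken from any included study:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups.

    m1/sd1/n1: intervention mean, SD, and sample size; m2/sd2/n2: control.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample bias correction
    return j * d

# Hypothetical example: blended group scores higher on a knowledge test
g = hedges_g(m1=78.0, sd1=10.0, n1=40, m2=70.0, sd2=12.0, n2=40)
print(round(g, 3))  # prints 0.717
```

Under the imputation rule above, a missing standard deviation would be replaced by the mean standard deviation across the other included studies (after rescaling) before calling `hedges_g`.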

The I² statistic was used to quantify heterogeneity across studies [[26](#ref26)], with an I² value of 50% or greater considered indicative of substantial heterogeneity. Given the anticipated differences among studies in design, participant populations, interventions, and educational settings, a random-effects model was chosen to accommodate this heterogeneity. Forest plots were generated to visually represent the meta-analysis findings. Funnel plots and Begg tests (significance at P<.05) were employed to explore potential publication bias. Meta-regression and subgroup analyses based on study design were conducted to investigate potential sources of heterogeneity, and sensitivity analyses were performed to assess the robustness of the findings.
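The paper does not state which between-study variance estimator was used; assuming the commonly used DerSimonian-Laird method, the random-effects pooling and the I² statistic can be sketched as follows (the per-study SMDs and variances below are hypothetical):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes.

    Assumes at least two studies. Returns (pooled effect, 95% CI, I^2 in %).
    """
    w = [1 / v for v in variances]  # inverse-variance (fixed-effect) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # Between-study variance tau^2, truncated at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each study's variance, then pool
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical SMDs and variances for three studies
pooled, ci, i2 = random_effects_pool([0.9, 1.2, 0.4], [0.04, 0.05, 0.03])
```

With heterogeneous inputs such as these, I² exceeds the 50% threshold described above, which is the situation the random-effects model is intended to accommodate.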

Results

Study Selection

The initial search strategy identified 3389 articles from MEDLINE. After screening titles and abstracts, 93 articles were deemed potentially eligible for inclusion, and their full texts were retrieved for detailed assessment. Of these, 56 articles met all inclusion criteria and were included in the final meta-analysis [[9](#ref9)–11,22,27–78] (Figure 1). All included articles were published in peer-reviewed journals.

Figure 1. Study Flow Diagram

Figure 1: PRISMA flow diagram illustrating the study selection process for the meta-analysis on blended learning in health education.

Type of Participants

Across the 56 included articles, a total of 9943 participants were analyzed. The majority of participant subgroups (30 of 56) were from the field of medicine [[11](#ref11),31–34,36,38,39,41,43,44,46–50,53,64–67,69–74,76,78,79]. Participant subgroups from other health professions included 16 in nursing [[9](#ref9),10,12,27,29,35,37,40,51,52,57,58,61–63,75], 1 in pharmacy [[37](#ref37)], 3 in physiotherapy [[12](#ref12),30,45], 5 in dentistry [[10](#ref10),42,54,55,59], and 4 in interprofessional education [[56](#ref56),60,68,77].

Geographically, 47 of the 56 studies were conducted in high-income countries. Specifically, 14 were from the United States [[9](#ref9),10,29,31,36,38,43,50,53–55,59,61,73], 2 from Canada [[47](#ref47),58], 5 from Germany [[39](#ref39),41,46,57,76], 3 from the United Kingdom [[42](#ref42),56,75], 3 from Spain [[30](#ref30),45,48], 1 from France [[74](#ref74)], 1 from Greece [[34](#ref34)], 1 from Sweden [[67](#ref67)], 1 from the Netherlands [[37](#ref37)], 1 from Korea [[40](#ref40)], 1 from Poland [[79](#ref79)], 1 from Serbia [[70](#ref70)], 1 from Croatia [[64](#ref64)], 1 from Turkey [[32](#ref32)], 2 from Taiwan [[28](#ref28),51], 1 from Japan [[69](#ref69)], and 7 from Australia [[44](#ref44),49,63,65,66,68,78]. The remaining 9 studies were conducted in low- or middle-income countries: 2 from Thailand [[52](#ref52),62], 1 from China [[77](#ref77)], 1 from Malaysia [[72](#ref72)], 2 from Iran [[27](#ref27),71], 1 from Jordan [[35](#ref35)], 1 from South Africa [[11](#ref11)], and 1 from Uruguay [[60](#ref60)].

Detailed technical characteristics of the blended learning systems, educational content topics, applied design methodologies, and information regarding the validity of outcome measurements are available in Multimedia Appendix 1.

Effects of Interventions

Blended Learning Versus Traditional Learning

The overall pooled effect size demonstrated a statistically significant and large positive effect of blended learning on knowledge outcomes (standardized mean difference 1.07, 95% CI 0.85 to 1.28, z=9.72, n=9943, P<.001). However, substantial heterogeneity was observed across studies (I²=94.3%). Figure 2 presents the details of the main analysis. The funnel plot asymmetry test (Figure 3) indicated potential publication bias among the included studies (Begg test P=.01). After adjusting for publication bias using the trim and fill method, the effect size was reduced to 0.41 (95% CI 0.16 to 0.66, P<.001), still suggesting that blended learning is more effective than traditional learning, albeit with a smaller effect.

Figure 2. Forest Plot: Blended Learning vs. Traditional Learning

Figure 2: Forest plot comparing blended learning to traditional learning for knowledge outcomes in health education. SMD: Standardized Mean Difference; CI: Confidence Interval; df: degrees of freedom.

Figure 3. Funnel Plot: Blended Learning vs. Traditional Learning

Figure 3: Funnel plot assessing publication bias in studies comparing blended learning versus traditional learning for knowledge outcomes.

Offline Blended Learning Versus Traditional Learning

Among the 3 studies [[27](#ref27)–29] comparing offline blended learning to traditional learning, 2 studies [[27](#ref27),28] reported significantly better knowledge outcomes in the blended learning groups than in the traditional learning control groups, with positive standardized mean differences. However, one study [[29](#ref29)] did not find a statistically significant difference in knowledge outcomes (standardized mean difference 0.08, 95% CI –0.63 to 0.79). The pooled effect for knowledge outcomes across these studies suggested no significant overall effect of offline blended learning compared with traditional education alone (standardized mean difference 0.67, 95% CI –0.50 to 1.84, I²=87.9%, n=327) (Figure 4).

Figure 4. Forest Plot: Offline Blended Learning vs. Traditional Learning

Figure 4: Forest plot comparing offline blended learning to traditional learning for knowledge outcomes. CI: Confidence Interval; df: degrees of freedom.

Online Blended Learning Versus Traditional Learning

In the comparison of online blended learning with traditional learning, 26 of 41 studies [[34](#ref34)–37,39–41,43,46,50,53–55,58,60,64,69–72,74,75,77,78] demonstrated significantly better knowledge outcomes in groups receiving blended learning than in their traditional learning counterparts. The pooled effect for knowledge outcomes showed a statistically significant standardized mean difference of 0.73 (95% CI 0.60 to 0.86, n=6976) (Figure 5). However, substantial heterogeneity was observed in this pooled analysis (I²=94.9%).

Figure 5. Forest Plot: Online Blended Learning vs. Traditional Learning

Figure 5: Forest plot comparing online blended learning to traditional learning for knowledge outcomes. SMD: Standardized Mean Difference; CI: Confidence Interval; df: degrees of freedom.

Digital Learning Versus Traditional Learning

Only 3 of the 7 studies [[30](#ref30)–33,56,63,76] comparing digital learning to traditional learning reported significantly better knowledge outcomes in the digital learning groups. The pooled effect for knowledge outcomes indicated no statistically significant difference between digital blended learning and traditional learning (standardized mean difference 0.04, 95% CI –0.45 to 0.52, I²=93.4%, n=1093) (Figure 6).

Figure 6. Forest Plot: Digital Blended Learning vs. Traditional Learning

Figure 6: Forest plot comparing digital blended learning to traditional learning for knowledge outcomes. CI: Confidence Interval; df: degrees of freedom.

Computer-Assisted Instruction Blended Learning Versus Traditional Learning

Among studies focusing on computer-assisted instruction (CAI) blended learning, 5 of 8 studies [[9](#ref9)–11,38,48,49,52,73] showed significantly higher knowledge outcomes in the CAI blended learning groups than in the traditional learning groups. One study [[38](#ref38)] reported a significant negative effect of CAI blended learning compared with traditional learning (standardized mean difference –0.68, 95% CI –1.32 to –0.04). The remaining studies [[9](#ref9),52] found no significant difference. The pooled effect for knowledge outcomes suggested a significant improvement with CAI blended learning compared with traditional education alone (standardized mean difference 1.13, 95% CI 0.47 to 1.79, I²=78.0%, n=926) (Figure 7).

Figure 7. Forest Plot: Computer-Assisted Instruction Blended Learning vs. Traditional Learning

Figure 7: Forest plot comparing computer-assisted instruction blended learning to traditional learning for knowledge outcomes. CI: Confidence Interval; df: degrees of freedom.

Virtual Patient Blended Learning Versus Traditional Learning

In 4 of 5 studies [[59](#ref59),65–67,79] investigating knowledge outcomes when virtual patients (VPs) were used as a supplement to traditional learning, groups with VP blended learning support demonstrated significantly better knowledge outcomes than traditional learning control groups. Only one study [[67](#ref67)] reported no statistically significant difference in knowledge outcomes with VP blended learning (standardized mean difference 0.13, 95% CI –0.30 to 0.56). The pooled effect for knowledge outcomes indicated significant positive effects for VP blended learning (standardized mean difference 0.62, 95% CI 0.18 to 1.06, I²=78.4%, n=621) (Figure 8).

Figure 8. Forest Plot: Virtual Patient Blended Learning vs. Traditional Learning

Figure 8: Forest plot comparing virtual patient blended learning to traditional learning for knowledge outcomes. CI: Confidence Interval; df: degrees of freedom.

Sensitivity Analyses

Initial subgroup analyses did not fully explain the high heterogeneity observed across the overall results. Further exploration considered differences in blended learning effectiveness across various health professions disciplines. A majority of studies (30/56, 54%) included medical students as participants.

Analyzing knowledge outcomes specifically for medicine, nursing, and dentistry revealed some notable differences. The pooled effect for studies in medicine showed a standardized mean difference of 0.91 (95% CI 0.65 to 1.17, z=6.77, I²=95.8%, n=3418, P<.001) (Figure 9), nursing studies showed a standardized mean difference of 0.75 (95% CI 0.26 to 1.24, z=2.99, I²=94.9%, n=1590, P=.008) (Figure 10), and dentistry studies showed a lower standardized mean difference of 0.35 (95% CI 0.17 to 0.53, z=3.78, I²=37.6%, n=1130, P<.001) (Figure 11). The dentistry group included 3 online blended learning studies (standardized mean difference 0.37, 95% CI 0.14 to 0.64, z=2.63, I²=58.3%, n=879), 1 virtual patient learning study, and 1 computer-assisted instruction learning study.

Figure 9. Forest Plot: Blended Learning vs. Traditional Learning – Medical Students

Figure 9: Forest plot comparing blended learning to traditional learning for knowledge outcomes specifically in medical students. CI: Confidence Interval; df: degrees of freedom.

Figure 10. Forest Plot: Blended Learning vs. Traditional Learning – Nursing Students

Figure 10: Forest plot comparing blended learning to traditional learning for knowledge outcomes specifically in nursing students. CI: Confidence Interval; df: degrees of freedom.

Figure 11. Forest Plot: Blended Learning vs. Traditional Learning – Dentistry Students

Figure 11: Forest plot comparing blended learning to traditional learning for knowledge outcomes specifically in dentistry students. CI: Confidence Interval; df: degrees of freedom.

Further analysis revealed notable findings for offline blended learning in nursing compared with traditional learning (standardized mean difference 1.28, 95% CI 0.25 to 2.31, z=2.43, I²=86.2%, n=249) and for computer-assisted instruction in nursing (standardized mean difference 0.53, 95% CI 0.17 to 0.90, z=2.84, I²=23.9%, n=174), but not for online blended learning in nursing (standardized mean difference 0.68, 95% CI –0.07 to 1.45, z=1.76, I²=96.7%, n=1091).

Within medicine specifically, significant effects were observed for digital blended learning compared with traditional learning (standardized mean difference 0.26, 95% CI 0.07 to 0.45, z=2.71, I²=95.6%, n=417) [[31](#ref31)–33,76], for virtual patient approaches (standardized mean difference 0.71, 95% CI 0.14 to 1.28, z=2.45, I²=85.8%, n=416) [[65](#ref65)–67,79], for online modalities (standardized mean difference 1.26, 95% CI 0.81 to 1.71, z=5.49, I²=96.1%, n=1879) [[34](#ref34),36,38,39,41,43,44,46,47,50,53,64,69–72,74,78], and for computer-aided instruction (standardized mean difference 2.1, 95% CI 0.68 to 3.44, z=2.91, I²=97.9%, n=706) [[11](#ref11),38,48,49,73]. These findings generally suggest more pronounced positive effects of blended learning over traditional learning alone in medical education.

Risk of Bias

The summary of risk of bias assessment across the included studies is presented in Figure 12. The risk of bias related to outcome assessment was mitigated in many studies through the use of automated assessment instruments, resulting in a rating of low risk for 50 out of 56 studies. However, the validation of these instruments remained unclear in some cases. Attrition bias was considered within acceptable levels in a portion of studies (low risk in 24 of 56 studies), but the potential for voluntary bias and its influence on the estimated effect could not be entirely excluded. Reporting bias was assessed as low in 28 of the 56 studies.

Figure 12. Risk of Bias Summary

Figure 12: Risk of bias summary for included studies. (+ low risk of bias; – high risk of bias; ? unclear risk of bias).

Allocation bias is not considered a major concern in this review. If studies described an adequate randomization method or provided an unclear description, it was assumed that randomization was unlikely to be fundamentally flawed. Performance bias related to traditional learning might be present, but inherently difficult to eliminate in this type of educational research. Blinding of participants in blended learning comparisons is possible but remains uncommon in the current literature. Due to the substantial heterogeneity among included studies, a reliable estimation of publication bias is challenging.

Discussion

Principal Findings

This meta-analysis provides several key findings regarding the effectiveness of blended learning in health professions education. First, blended learning demonstrates a consistently large and positive effect on knowledge acquisition compared with traditional learning (standardized mean difference 1.07, 95% CI 0.85 to 1.28). A plausible explanation for this enhanced effectiveness is that blended learning environments often allow students to revisit electronic materials as needed and learn at their own pace, potentially optimizing learning performance [[80](#ref80)]. Even after adjusting for potential publication bias, the trim and fill method still indicated a significant, albeit reduced, positive effect (standardized mean difference 0.41, 95% CI 0.16 to 0.66), reinforcing the overall benefit of blended learning. This meta-analysis strengthens previous findings [[12](#ref12)], although the substantial heterogeneity across studies necessitates careful interpretation. Subgroup analyses based on participant type partially addressed these variations.

The effectiveness of blended learning is multifaceted and contingent on the appropriateness of the evaluation approach. Evaluations should ideally occur before implementation to identify needs, consider participant characteristics, analyze context, and gather baseline data [[81](#ref81)]. Some interventional studies have highlighted blended learning’s potential to improve course completion rates, enhance retention, and increase student satisfaction [[82](#ref82)]. However, comparisons of academic achievement or grade distributions between blended learning and traditional environments have not consistently shown significant differences [[83](#ref83)]. Ultimately, the success of blended learning likely depends on a complex interplay of student characteristics, instructional design features, and specific learning outcomes. Learner success is influenced by factors such as technical proficiency, comfort with technology, and skills in computer operations and internet navigation. Thus, prior experience with the internet and computer applications can significantly impact a learner’s ability to thrive in a blended learning environment. Studies suggest that motivation is a crucial factor for success in blended learning, with highly motivated students demonstrating greater persistence in online courses [[84](#ref84)]. Furthermore, effective time management is a critical determinant of success in online learning components of blended learning [[85](#ref85)].

Second, offline blended learning did not show a significant pooled positive effect compared to traditional learning. Notably, two of the three studies in this category focused on nursing education. These findings align with previous meta-analyses on offline digital education [[86](#ref86)]. Despite this, offline education offers potential advantages, such as unrestrained knowledge dissemination and improved accessibility of health education in resource-limited settings [[87](#ref87)]. Effective offline interventions could leverage interactive, associative, and perceptual learning experiences through text, images, audio-video, and other multimedia components [[88](#ref88),89].

Third, the effect of digital learning on knowledge outcomes was inconsistent across overall and subgroup analyses. While the overall analysis of digital blended learning studies showed no significant effect compared with traditional learning, a positive effect was observed in the medicine subgroup (standardized mean difference 0.26, 95% CI 0.07 to 0.45). Previous research has yielded similarly mixed results [[18](#ref18),90]. However, George et al [[18](#ref18)] demonstrated the effectiveness of digital learning for undergraduate health professionals compared with traditional learning in certain contexts.

Fourth, the analysis of studies of computer-assisted instruction (CAI) revealed a significant positive difference in knowledge acquisition outcomes, which was even more pronounced in the medicine subgroup. However, this finding should be interpreted cautiously because of the high level of heterogeneity observed in both the overall CAI analysis (I²=78.0%) and the medicine subgroup (I²=97.9%). Previous studies have suggested that CAI can be as effective as traditional learning [[91](#ref91)], but these studies also often exhibit high heterogeneity, requiring cautious interpretation. A comparative approach focusing on variations in intervention design, sample characteristics, and learning context is needed to better understand the effectiveness of CAI in blended learning. It is also important to consider that CAI might be perceived negatively by some students, potentially affecting learning outcomes.

Fifth, the study by Al-Riyami et al [[92](#ref92)] highlighted potential challenges related to technology access. Participants reported difficulties accessing course materials due to network issues with the university server and internet connectivity, limiting the full utilization of asynchronous discussion boards. It is important to note that such challenges can arise regardless of the online component of a course: in traditional learning, students may choose not to participate in discussions, and internet connectivity issues can occur in various settings. This underscores that local conditions and infrastructure, rather than inherent limitations of the modality, can influence the preference for one instructional approach over another.

Sixth, virtual patient (VP) blended learning demonstrated an overall positive pooled effect on knowledge outcomes compared to traditional learning, a finding consistent with a previous meta-analysis [[93](#ref93)]. This study further supports evidence from prior reviews [[94](#ref94),95] that have included studies since 2010. However, VP simulations primarily target skill development rather than knowledge acquisition. This might explain the relatively smaller number of studies and the moderate added value of VPs in enhancing knowledge outcomes compared to traditional learning. VPs are particularly impactful in skills training, problem-solving application, and situations where direct patient contact is limited [[93](#ref93)]. As proposed by Cook and Triola [[96](#ref96)], VPs serve as a valuable modality for learners to actively develop and refine clinical reasoning and critical thinking abilities prior to bedside learning [[96](#ref96)]. Nevertheless, some nuances exist. Studies comparing different instructional methods within VP simulations suggest that narrative VP designs might be more effective than highly autonomous, problem-oriented designs, indicating a potential need for structured guidance [[97](#ref97)]. Furthermore, human feedback within VP systems has been shown to enhance empathy more effectively than animated backstories [[98](#ref98)], although simply learning from a VP scenario, even without feedback, can yield positive outcomes [[99](#ref99)]. This highlights that while realistic patient scenarios and learner autonomy are valuable, neglecting instructional guidance related to learning objectives within VP simulations can limit their effectiveness [[100](#ref100)].

Strengths and Limitations

This meta-analysis possesses several notable strengths. Evaluating the effectiveness of blended learning in health professions education is timely and highly relevant for both educators and learners in the field. The study intentionally adopted a broad scope, encompassing diverse learning topics and including studies with learners from various health professions.

The participant samples in this study encompassed a wide range of health professionals (medicine, nursing, dentistry, and others) across diverse health care disciplines. While this heterogeneity might contribute to the overall high level of heterogeneity observed, the moderate to large effect sizes found in most subgroup analyses exploring variations in participant types suggest that blended learning holds promise across various health learning disciplines.

However, certain limitations should be acknowledged. The systematic literature search was limited to a single database (MEDLINE), although with broad search terms and few exclusion criteria. The quality of the meta-analysis is inherently dependent on the quality of the data reported in the included studies. Although standard deviations were imputed for some interventions with missing data using the average standard deviation from other studies, this approach introduces a potential for error. Subgroup analyses, particularly those based on study design, country socioeconomic status, and outcome assessment, should be interpreted cautiously due to the absence of a priori hypotheses in some cases. Furthermore, the variability in study interventions, assessment instruments, and contextual factors, which were not comprehensively assessed, represents potential sources of heterogeneity and necessitates careful interpretation of the results. Finally, evidence of publication bias was detected.

Conclusions

This study has significant implications for research and practice in blended learning within health professions education. Despite limitations related to heterogeneity across studies, the findings of this meta-analysis reinforce the potential of blended learning to positively impact knowledge acquisition in health professions education. Blended learning appears to be a promising and worthwhile approach for broader application in health professions training. The observed variations in effectiveness across subgroup analyses of health professions indicate that different implementations of blended learning courses can yield varying levels of effectiveness. Therefore, researchers and educators should prioritize investigating effective strategies for implementing blended learning courses. Future research directly comparing different blended learning instructional methods could provide valuable insights into optimizing its application in health education.

Appendix

Multimedia Appendix 1

Supplemental appendix: Detailed characteristics of included studies.

jmir_v22i8e16504_app1.docx (82.1KB, docx)

Footnotes

Authors’ Contributions: AV: Conceptualization, Design, Original idea, Statistical analyses, Manuscript writing. ES: Article screening and selection, Manuscript writing. AC: Manuscript writing. JB: Manuscript writing.

Conflicts of Interest: None declared.

References

[References]

