AUTHOREA

Preprints

Explore 66,105 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.

Optimizing the Performance of Water-processed LiNi0.6Mn0.2Co0.2O2 Battery Half Cells...
Chirag Patel
William Poizer

and 14 more

March 27, 2026
Water-based cathode slurries for lithium-ion batteries can replace per- and polyfluoroalkyl substance (PFAS) binders with biodegradable alternatives. Dispersing carbon black in water is the main formulation challenge. For the first time, an amphiphilic block copolymer, Pluronic F68, is blended with a water-soluble homopolymer, polyethylene oxide, to simultaneously disperse carbon black in water and control the viscosity of the slurry. Two series of binder blends are prepared: one systematically varies the viscosity of the binder solution, and the other systematically varies the solid loading of the slurry. These orthogonal series demonstrate that the polymer surfactant has a strong effect on half-cell performance independent of the role of viscosity. At the optimum formulation, water-processed half-cells using LiNi0.6Mn0.2Co0.2O2 achieve a first specific discharge capacity of up to 166 mAh g–1 and capacity retention of 89% after 100 cycles at 1C, comparable to cells made with a PFAS binder. Water-processed cells outperform conventional cells at a fast charging rate of 5C. The results suggest that the affinity of the polymer binder(s) for carbon black, measured via adsorption, can be used as a selection criterion to identify binders that optimize the electrochemical performance of water-processed battery cells.
Sustainable Bio-Inspired Anisotropic Proton Exchange Membranes by Self-Assembly of Bi...
Antoine Aynard
Emilie Planes

and 2 more

March 27, 2026
A new class of bio-inspired polymer membranes was elaborated using block copolymer (BCP) self-assembly. The BCPs are based on a proton-conductive block, sulfonated poly(pentafluorostyrene) (sPPFS), paired with highly hydrophobic monomers derived from biomass, namely terpene acrylates. Using either cyclic thymyl acrylate (TA) or linear tetrahydrogeraniol acrylate (THGA), hydrophobic blocks with high or low glass transition temperature (Tg) values were tailored and evaluated for film formation. PTA-b-PPFS or (PTA-s-PTHGA)-b-PPFS copolymers were first synthesized via nitroxide-mediated polymerization (NMP), followed by post-introduction of sulfonate groups through a para-fluoro/thiol reaction to install the ionic functions. The copolymer composition (various PFS contents), as well as the addition of an ionic crosslinker (PBI), influences the membrane morphologies and properties, i.e. mechanical strength, water uptake, and conductivity. The immiscibility between the sPPFS and poly(terpene acrylate) blocks was used to advantage to generate anisotropic segregated domains, creating bio-inspired nano-channels for proton conduction that further boost the through-plane membrane conductivity. Using this bio-inspired strategy, more sustainable anisotropic membranes containing up to 50 wt% renewable bio-sourced carbon and only 15 wt% fluorine were tailored, with features similar to those of benchmark isotropic perfluorinated membranes.
Thinking Cybersecurity PART III - CHAPTER 9  Synthetic Data for Red-Teaming and Threa...
Paulo H. Leocadio

March 30, 2026
Chapter 9: Synthetic Data for Red-Teaming and Threat Simulation. Paulo H. Leocadio.

Introduction. Synthetic data is not an auxiliary technique in cybersecurity research; it is a structural necessity. As defensive systems become increasingly autonomous, the absence of adversarial pressure during design and evaluation becomes a liability. Real-world attack data is sparse, sensitive, and historically biased. Consequently, defensive architectures that rely exclusively on observed incidents risk overfitting to the past. This chapter examines synthetic adversarial modeling as a control instrument within cognitive defense architectures. It formalizes the generation of simulated attacker behavior, analyzes the role of infrastructure in constraining autonomous execution, and explores how platforms, privacy controls, vault mechanisms, and bounded execution surfaces jointly govern safe autonomy. The objective is not to increase automation, but to structure adversarial pressure within enforceable infrastructure constraints.

Scenario framing: why do synthetic adversaries exist? Before discussing platforms and vaults, we must answer a prior question: why simulate adversaries at all? Real-world cybersecurity data is subject to three structural limitations. First, labeled multi-stage intrusion datasets remain scarce and often exhibit synthetic artifacts or outdated threat patterns (Ring et al. 2019). Second, telemetry used for machine learning often contains noisy labels and exhibits non-stationary distributions, which limit generalization (Anderson and McGrew 2017). Third, legal and privacy constraints restrict access to realistic operational datasets, impeding large-scale experimentation. In short: scarcity of labeled multi-stage attacks, legal and privacy constraints on telemetry sharing, and bias toward known threat patterns. Synthetic adversarial modeling addresses these limitations by enabling controlled generation of attack sequences under reproducible conditions (Ring et al. 2019; Anderson and McGrew 2017).
Recent surveys demonstrate the growing use of generative adversarial networks and related techniques for intrusion detection benchmarking and traffic simulation (Dunmore et al. 2023; Gimenez 2025; Steinel, Lim and Ku 2025; Cullen et al. 2022). Unlike static datasets, generative approaches allow structured variation across attack stages and intensity. Consider a municipal digital infrastructure whose environment includes identity services, citizen portals, payment systems, and cloud-managed infrastructure. Adversarial simulation further aligns with structured red-teaming methodologies, where defensive systems are stress-tested through controlled hostile interaction (Ganguli et al. 2022). The generative mechanisms used in synthetic threat modeling are conceptually grounded in adversarial learning frameworks, first formalized in generative adversarial networks (Goodfellow et al. 2014) and later refined for improved stability (Arjovsky, Chintala and Bottou 2017). From a governance perspective, reliance solely on observed incidents contradicts established risk-management doctrine, which emphasizes proactive scenario modelling and structured threat assessment (Blank and Gallagher 2012; Shostack 2014; Brundage et al. 2020). Waiting for real adversarial events to evaluate detection agents is both unethical and operationally negligent. Instead, we construct programmable threat environments (Blank and Gallagher 2012; Shostack 2014; Brundage et al. 2020). Synthetic data is not mock data; it is deliberately applied counterfactual adversarial pressure on cognitive defense systems.

9.1 Platforms as cognitive substrates. The discussion that follows is organized around four interdependent infrastructural dimensions: platforms, privacy controls, vault boundaries, and execution surfaces.
Together, these define the containment geometry within which synthetic adversarial pressure can be safely applied. Platforms are not neutral deployment environments. They encode assumptions about identity, persistence, isolation, observability, and control, all of which directly shape the behavior of autonomous or semi-autonomous systems. In cognitive defense architectures, the platform serves as the substrate that constrains reasoning, action, and auditability. Modern cloud platforms (from global providers such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform) provide elastic compute and managed services. However, elasticity alone is not the relevant property. The decisive factor is whether the platform allows bounded execution with enforceable guarantees. As introduced in Chapter 7, agent decisions operate across coordinated data, model, and policy states. Synthetic adversarial modeling extends that structure by treating attacks as staged transitions through these layered decision boundaries rather than as isolated intrusion events. Figure 1 shows that the layered interaction between data interpretation, model inference, and policy gating produces a sequence of decision states. Synthetic adversarial trajectories operate across these layers rather than targeting a single detection mechanism.
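The chapter's notion of a programmable threat environment, attacks as staged transitions rather than isolated events, can be illustrated with a minimal stage-transition sampler. The stage names and transition probabilities below are hypothetical placeholders, not a calibrated threat model from the chapter:

```python
import random

# Hypothetical attack stages and transition probabilities; a real
# programmable threat environment would calibrate these per scenario.
STAGES = ["recon", "initial_access", "lateral_movement", "exfiltration", "done"]
TRANSITIONS = {
    "recon":            [("initial_access", 0.7), ("recon", 0.3)],
    "initial_access":   [("lateral_movement", 0.6), ("recon", 0.2), ("done", 0.2)],
    "lateral_movement": [("exfiltration", 0.5), ("lateral_movement", 0.5)],
    "exfiltration":     [("done", 1.0)],
}

def sample_trajectory(rng: random.Random, start: str = "recon", max_len: int = 20):
    """Sample one synthetic multi-stage attack trajectory."""
    path = [start]
    while path[-1] != "done" and len(path) < max_len:
        states, probs = zip(*TRANSITIONS[path[-1]])
        path.append(rng.choices(states, weights=probs, k=1)[0])
    return path

rng = random.Random(42)
trajectories = [sample_trajectory(rng) for _ in range(100)]
```

Structured variation across "attack stages and intensity" then amounts to sweeping the transition matrix and trajectory length, which is exactly what a reproducible benchmark needs.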
Environmental Patch Dynamics: A Vertical Metacommunity Framework Emerging from Marine...
Yeray Gonzalez-Marrero
Sabrina Clemente

and 5 more

March 27, 2026
In metacommunity theory, which has primarily addressed horizontal processes, vertical connectivity has received little attention despite being a primary driver of biodiversity in spatially structured environments. This conceptual blind spot limits our understanding of community assembly in marine environments, where vertical processes are fundamental. Here, we used marine cave sessile communities as a model system to develop and test the first mechanistic framework to explicitly incorporate vertical connectivity, which we applied across hierarchical spatial scales. Based on a two-year dataset including nine marine caves in the Canary Islands, we partitioned community variance, examined spatial patterns of beta diversity, and analysed autocorrelation. Three-dimensional bathymetric analysis quantified connectivity to deep-water habitats, and mixed-effects models tested its influence on community diversity and thermal affinities. Cave-scale variance exceeded island-scale variance, a pattern consistent with a pronounced “cave individuality”. Species turnover drove beta diversity, and distance decay was weak — a pattern inconsistent with horizontal dispersal models. Bathymetric connectivity was a key predictor of diversity: caves more connected to deep-water habitats supported richer assemblages. These patterns reflect shifts in thermal structure, with well-connected caves exhibiting greater thermal diversity and stronger cold-water affinities. We propose the Environmental Patch Dynamics (EPD) framework, a mechanistic extension of metacommunity theory integrating species sorting and patch dynamics across four hierarchical stages. Vertical recruitment from depth-stratified external propagule pools, mediated by episodic upwelling, primarily structures these communities. 
By incorporating vertical connectivity and benthic–pelagic coupling, the EPD framework provides a transferable architecture for understanding community assembly in other vertically-structured ecosystems and reveals clear implications for their conservation under a warming ocean.
Range sizes, but not abundance–distance relationships, are conserved globally across...
Connor Panter
Stephan Kambach

and 7 more

March 27, 2026
Understanding whether current macroecological biodiversity patterns are driven by shared evolutionary history remains a central question in biogeography. Here, we examine whether species’ geographic range sizes and abundance–distance relationships (ADRs) exhibit phylogenetic signal across terrestrial taxa, and what this reveals about their evolutionary structuring. We compiled published ADRs together with global range size data and phylogenetic information for 2,545 species, including 1,685 birds, 647 plants, and 213 mammals. Variation in ADRs and range sizes was quantified across taxonomic levels and across clades of increasing phylogenetic distance, and compared with random dispersion null expectations. Phylogenetic signal was evaluated with Blomberg’s K and Moran’s I, and alternative evolutionary models were compared to assess which processes best described the distribution of ADRs and range sizes across phylogenies. We also examined whether any detected phylogenetic structure remained after accounting for dispersal-related traits, including plant height, seed mass, and body size. We found that range size showed consistent phylogenetic clustering across most taxonomic and phylogenetic levels, indicating that closely related species tend to have similar geographic extents. In contrast, ADRs exhibited limited phylogenetic structure, with weak under-dispersion detected only among plant species at intermediate phylogenetic depths. Trait evolution for both ADRs and range sizes was most consistent with an Ornstein–Uhlenbeck model, suggesting convergence toward optimal values rather than unrestricted divergence. After accounting for dispersal-related traits, range size retained significant phylogenetic signal, whereas ADRs did not differ from random expectations. Together, these findings indicate that geographic range sizes, but not ADRs, are strongly structured by phylogenetic relatedness across birds, plants, and mammals.
Our findings suggest that broad-scale patterns of species’ range size are more evolutionarily conserved than ADRs, implying a fundamental decoupling between macroecological and population-level processes.
The Soil Museum in Yazd
Maziyar Akbari

March 30, 2026
This article examines the Soil Museum in Yazd as an innovative project that merges environmental education with the conservation of desert vernacular architecture. The museum highlights soil as "the living skin of the Earth," an essential yet frequently overlooked component supporting life, biodiversity, and civilization. Yazd, located in Iran's central desert, features a rich architectural tradition of mud-brick construction, windcatchers, and climate-adapted design. However, this heritage is threatened by modernization, declining traditional skills, and disregard for indigenous knowledge. The Soil Museum educates visitors on soil diversity worldwide, including Meybod's colorful volcanic soils, while emphasizing the profound relationship between soil, culture, and architecture. Analyzing the museum's exhibits and educational mission, this paper demonstrates soil's role as a fundamental link connecting natural systems and the built environment. The research contends that protecting soil and preserving vernacular architecture are interdependent pursuits, crucial for sustaining cultural identity and ecological balance. The Yazd Soil Museum provides an inspiring precedent for projects integrating scientific knowledge, cultural heritage, and sustainable design, contributing to the renewal of traditional architectural wisdom in contemporary practice.
Spondylolisthesis and tuberculosis
Taruna Penmetcha
Gunar Subieta

and 1 more

March 27, 2026
Background: Spondylolisthesis is a frequent cause of low back pain but is seldom associated with infectious causes and multisystem failure. Case Presentation: We present a patient who complained of severe low back pain and acute onset of neurological deficits that progressed to cardiovascular collapse and developed into a complex diagnostic and
Working Principle of an Infrared Search and Track (IRST) System
Chinnaraji Annamalai

March 30, 2026
Infrared Search and Track (IRST) systems represent a critical component of modern electronic warfare and aerial surveillance. Unlike active radar, which emits detectable radio-frequency waves, IRST is a passive sensor technology that identifies and tracks targets based solely on their thermal signatures. This article explores the fundamental architectural components of an IRST system, including the lens aperture, focal length, and detector array. By analyzing the contrast between the target's infrared heat signature and the cold atmospheric background, we define the geometric and optical parameters necessary for long-range detection. The study highlights the system's ability to provide high angular resolution and stealthy operation in contested environments where traditional radar might be jammed or detected.
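The geometric parameters this abstract names (aperture, focal length, detector array) relate through standard first-order optics. The sketch below uses hypothetical sensor values (15 µm pixel pitch, 300 mm focal length, 100 mm aperture, 4 µm mid-wave infrared), not the system described in the article:

```python
import math

def ifov_rad(pixel_pitch_m: float, focal_length_m: float) -> float:
    """Instantaneous field of view of one detector pixel (small-angle)."""
    return pixel_pitch_m / focal_length_m

def diffraction_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

def one_pixel_range_m(target_size_m: float, ifov: float) -> float:
    """Range at which a target of given size subtends exactly one pixel."""
    return target_size_m / ifov

ifov = ifov_rad(15e-6, 0.300)              # 50 µrad per pixel
diff = diffraction_limit_rad(4e-6, 0.100)  # ~48.8 µrad at 4 µm
rng = one_pixel_range_m(10.0, ifov)        # a 10 m target fills one pixel at 200 km
```

With these assumed values the pixel IFOV and the diffraction limit are roughly matched, which is the usual design target: a larger aperture or longer focal length pushes the one-pixel detection range outward.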
Investigating genetic spillage in a Mimulus cardinalis (syn. Erythranthe cardinalis)...
Lydia Duran
Niveditha Ramadoss

and 3 more

March 27, 2026
Background and Aims: Common gardens are critical for studying trait variation in plant species and how environmental or genetic factors influence it. In common gardens, non-native species and non-local populations must be monitored closely to reduce their chances of escaping, as escape can lead to genetic spillage: the introduction of foreign genetic material, with potentially unwanted phenotypes/genotypes, into native ecosystems. This could decrease the fitness of native populations and threaten natural habitats. This study investigated a possible spillage of genetic material from an existing common garden experiment on the Scarlet Monkeyflower. The garden consists of six populations that originated from three regions across the species’ distribution (North, Central, and South) and were propagated at an Ecological Reserve in Southern California. A population of E. cardinalis was spotted a kilometer away from the common garden, raising the question of whether it is a wild population or an escapee. Methods: Leaf samples were collected from individuals of each provenance established at the common garden, as well as from the “unknown” population. DNA was extracted from all samples and processed for DArT sequencing to generate genome-wide SNPs. Genetic clustering analyses, including DAPC, PCoA, and ADMIXTURE, were used to determine the genetic proximity of the unknown population to the six provenances from the common garden. Key Results: Clustering analyses of all individuals revealed that the unknown population is genetically related to the southern populations. A focused analysis of only the southern and unknown populations showed that the unknown individuals formed a distinct cluster, suggesting they are not escapees but represent a natural population in that region.
Conclusion: Although spillage was not detected, these findings serve as a reminder for researchers to monitor their garden experiments closely, both to control escapees and to reduce the possibility of spillage occurring.
Divergent environmental drivers of meadow and steppe productivity in the overlapping...
Xiaoyu Zou
Jianzhou Wei

and 5 more

March 27, 2026
Meadow and steppe ecosystems in the overlapping area of the Qinghai–Tibet Plateau and the Loess Plateau (OQLP) are vital to global carbon cycling and ecological security. However, the driving mechanisms underlying the dynamics of meadow and steppe productivity under a shared climatic context remain unclear, limiting our understanding of their patterns of change. Herein, an entropy-weighted integrated productivity index (IPI) was developed using 1-km MODIS normalized difference vegetation index (NDVI) and net primary productivity (NPP) products (2000–2023) accompanied by 153 field biomass plots. Random forest, variation partitioning, and confirmatory path analysis (CPA) modeling were applied to disentangle the key climatic, soil, and topographic drivers and potential mechanisms. Results showed significant increases over time in both NDVI (slope = 0.0023) and NPP (slope = 6.15 gC m-2 yr-1) across the OQLP, with spatial heterogeneity marked by greater improvements in central valleys and mountainous areas. Random forest models showed growing season climate factors (precipitation, temperature, and evapotranspiration) explained 58 to 63% of productivity variations in the meadow and steppe, respectively, indicating these were the dominant drivers. However, ecosystem-specific differences emerged: meadow productivity was strongly influenced by soil pH and topography whereas steppe productivity was more sensitive to soil physicochemical properties (e.g., texture, cation exchange capacity) and non-growing season temperatures. Variation partitioning indicated the growing season climate accounted for the largest unique variance (~28% in meadows), while path analysis demonstrated more complex causal pathways in meadows (eight significant paths) than in steppes (six paths). These findings highlight the need for ecosystem-specific management strategies that account for distinct environmental sensitivities in the face of climate change.
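The entropy-weighted integration of NDVI and NPP into a single index can be illustrated with the standard entropy weight method. The paper's exact IPI construction is not reproduced here; the sketch below is a generic illustration on synthetic, invented data:

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weights for the columns (indicators) of X; rows are plots.
    Indicators are min-max normalized so that larger values are better."""
    n = X.shape[0]
    span = X.max(axis=0) - X.min(axis=0)
    norm = (X - X.min(axis=0)) / (span + 1e-12)
    p = norm / (norm.sum(axis=0) + 1e-12)     # share of each plot per indicator
    plogp = np.zeros_like(p)
    mask = p > 0                               # treat 0*log(0) as 0
    plogp[mask] = p[mask] * np.log(p[mask])
    e = -plogp.sum(axis=0) / np.log(n)         # entropy per indicator, in [0, 1]
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

# Invented example data: NDVI and a correlated NPP column for 200 plots.
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.9, 200)
npp = 400 * ndvi + rng.normal(0, 20, 200)
X = np.column_stack([ndvi, npp])
w = entropy_weights(X)
ipi = ((X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))) @ w
```

Indicators with more dispersed (higher-information) distributions receive larger weights, so the integrated index is driven by whichever of NDVI or NPP discriminates plots more strongly.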
Design and Evaluation of AI-Powered Framework for Improving Data Quality in Disease a...
Seyyedeh Fatemeh Mousavi Baigi
Masoumeh Sarbaz

and 4 more

March 27, 2026
Data quality is essential for effective decision-making and evidence generation in health systems. Despite the increasing use of disease and health outcome registries, many systems suffer from missing, inconsistent, and inaccurate data, limiting their value for policy-making and service improvement. This study aims to develop and evaluate an artificial intelligence (AI)-powered framework to enhance data quality in disease and health outcome registries by supporting key stages of data management, including data entry, evaluation, anomaly detection, and correction. This study adopts a mixed-methods design, conducted in five interlinked phases. Phase 1 includes systematic reviews of AI frameworks and data quality dimensions. In Phase 2, a set of data quality indicators will be identified and prioritized using focus groups and the Analytic Hierarchy Process (AHP). Phase 3 involves the design and Delphi-based validation of a comprehensive data quality management framework. Phase 4 will develop AI models to detect and address anomalies in the data. Phase 5 will test these models on a selected disease registry, assessing their effectiveness in improving data quality. Qualitative and quantitative data will be analyzed using thematic analysis and statistical techniques. Ethical approval has been obtained from the Ethics Committee of Mashhad University of Medical Sciences (Approval Code: IR.MUMS.FHMPM.REC.1404.226). The study findings will be disseminated through peer-reviewed journals, conferences, and policy briefs. The results will also be presented in national and international forums, facilitating knowledge sharing and the potential adoption of the framework in other contexts.
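Phase 2 prioritizes data-quality indicators with the Analytic Hierarchy Process. As a minimal sketch, the standard geometric-mean approximation of AHP priorities with Saaty's consistency check looks as follows; the 3×3 comparison matrix over completeness, accuracy, and timeliness is a hypothetical example, not the study's actual indicator set:

```python
import numpy as np

def ahp_priorities(A: np.ndarray):
    """AHP priority weights via the geometric-mean (row) approximation,
    plus Saaty's consistency ratio for the pairwise comparison matrix A."""
    n = A.shape[0]
    gm = A.prod(axis=1) ** (1.0 / n)             # row geometric means
    w = gm / gm.sum()                             # normalized priorities
    lam_max = float(np.mean(A @ w / w))           # approx. principal eigenvalue
    ci = (lam_max - n) / (n - 1)                  # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    cr = ci / ri if ri else 0.0                   # consistency ratio (< 0.1 is ok)
    return w, cr

# Hypothetical pairwise judgements: completeness vs accuracy vs timeliness.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])
w, cr = ahp_priorities(A)   # completeness gets the largest weight; cr < 0.1
```

Focus-group judgements that produce a consistency ratio above 0.1 would normally be revisited before the weights are used.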
Neighbor-Corroborated Kalman Filtering for Robust Distributed Multi-Object Tracking
Vahid Ghorbani

and 5 more

March 30, 2026
Robust multi-object tracking (MOT) in distributed sensing networks is increasingly challenged by adversarial manipulation of measurements. In particular, stealthy ghost attacks generate observations that mimic genuine targets, making them difficult to distinguish using conventional trackers. While the ALARM (Average Likelihood for Attack-Resilient Multi-object) framework has recently been introduced within the Random Finite Set (RFS) paradigm, many practical systems rely on computationally efficient Kalman-based pipelines. This paper bridges this gap by proposing KF-ALARM, a distributed filtering framework that embeds the ALARM principle into the Kalman update. The method leverages cross-node track corroboration to modulate the measurement likelihood, attenuating the influence of adversarial measurements. The resulting approach achieves attack-resilient distributed MOT while preserving the efficiency and scalability of Kalman-based tracking architectures.
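The abstract's core idea, modulating the measurement likelihood by cross-node corroboration, can be sketched generically. The sketch below simply inflates the measurement covariance when few neighboring nodes corroborate a measurement, so ghost measurements pull the state less; the noise-inflation rule and all parameter values are illustrative assumptions, not the authors' KF-ALARM formulation:

```python
import numpy as np

def corroborated_kf_update(x, P, z, H, R, corroboration):
    """Kalman measurement update with a corroboration weight in (0, 1]:
    the fraction of neighboring nodes whose local tracks agree with z.
    Low corroboration inflates the effective measurement noise, so
    suspect (possibly ghost) measurements move the state less."""
    w = max(corroboration, 1e-3)            # guard against division by zero
    R_eff = R / w                           # down-weight uncorroborated z
    S = H @ P @ H.T + R_eff                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1-D constant-position example with a far-off "ghost" measurement.
x = np.array([0.0]); P = np.array([[1.0]])
H = np.array([[1.0]]); R = np.array([[1.0]])
z = np.array([10.0])
x_trusted, _ = corroborated_kf_update(x, P, z, H, R, corroboration=1.0)
x_suspect, _ = corroborated_kf_update(x, P, z, H, R, corroboration=0.1)
```

With full corroboration the update moves the state halfway to the measurement; at 10% corroboration the same measurement barely moves it, which is the attenuation behavior the abstract describes.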
Explainable AI for Root Cause Analysis in Large-Scale Datacenter Networks
Dheeraj Ramasahayam

March 30, 2026
Black-box RCA models are difficult to trust in datacenter operations. This paper presents a real-data-driven explainable RCA framework that combines topology-aware temporal attention with operator-facing explanations. The benchmark is built from public CAIDA passive 100G statistics and MAWI samplepoint-F summaries, yielding 162 captures from January 1, 2024 through March 25, 2026. Because public traces do not expose switch-level RCA labels, controlled localized incidents are injected on top of real traffic windows in a Clos topology. On a 9-node benchmark with 500 windows of length 6, the temporal XAI model achieves 100% failure accuracy and 100% root-cause accuracy, matches Random Forest and LSTM baselines, outperforms a non-attention GNN on failure detection, and concentrates 76.1% of explanation mass in the top three nodes. Estimated operator RCA time drops from 10.15 to 5.33 minutes. Across 9-, 52-, and 104-node fabrics, the model maintains 100% failure and root-cause accuracy while explanation compactness declines from 0.66 to 0.15, motivating hierarchical pod- and rack-level aggregation.
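A plausible reading of "explanation mass in the top three nodes" is the fraction of normalized attribution carried by the top-k nodes. The exact metric used in the paper is not given here, so the following is an assumption-labeled sketch of that reading:

```python
import numpy as np

def explanation_compactness(scores: np.ndarray, k: int = 3) -> float:
    """Fraction of total (absolute) explanation mass carried by the top-k
    nodes. 1.0 = perfectly concentrated; ~k/n for a uniform attribution."""
    mass = np.abs(scores)
    mass = mass / mass.sum()
    return float(np.sort(mass)[::-1][:k].sum())

# Concentrated attribution over 9 nodes (one faulty switch dominates)
concentrated = np.array([0.55, 0.15, 0.06, 0.05, 0.05, 0.05, 0.04, 0.03, 0.02])
uniform = np.ones(9) / 9
assert explanation_compactness(concentrated) > explanation_compactness(uniform)
```

Under this reading, a compactness near k/n (as on the 104-node fabric) means the explanation is spread thin, which is why the authors suggest aggregating to pods and racks before presenting it to operators.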
The Neutron Star: Its Layered 4-D Rotor Structure and Transition into a Black Hole
Stephen Euin Cobb

March 30, 2026
Neutron stars have long been modeled as three-dimensional spheres of degenerate nuclear matter held up by quantum pressure, yet this picture leaves major questions unanswered: how magnetars sustain fields exceeding 10¹¹ T for millennia, why rotational glitches occur in discrete quanta, and why gravitational-wave data imply stronger central compactness than nuclear equations of state predict. In earlier work, subatomic particles were described as four-dimensional (4-D) curvature rotors: localized standing patterns of torsion whose internal rotation projects as spin, charge, and magnetic moment. Here we extend that geometry to stellar scale. A neutron star is modeled as a two-radius system: a compact torsion core (r_core ≪ R_curv) containing phase-locked 4-D rotation, surrounded by a macroscopic curvature envelope (R_curv ≈ 10–13 km) that defines the observed photosphere. Between these radii lie three previously unrecognized regions: the shear membrane, the torsion-pressure gradient zone, and the torsion halo, each with distinct physical roles. This layered structure reproduces magnetar stability, quantized glitches, and low tidal deformability, while predicting subtle polarization and redshift anomalies. A stability analysis shows that collapse to a black hole occurs when the torsional pressure P_torsion ≈ ρ Ω₄² r_core² / κ falls below the gravitational pressure P_g ≈ G M² / R_curv⁴. The "black-hole boundary" thus marks the loss of 4-D phase coherence, not merely a density threshold. This framework unifies nuclear and astrophysical curvature within a single geometric model of matter.
Editor's Note (November 5, 2025 Revision): Since the completion of this paper, later studies (R81 and R82) have extended the model of neutron-star geometry to include curvature-feedback thermodynamics. While the mechanical account of collapse presented here remains valid, readers are directed to the Addendum at the end of this document, "Curvature-Coherence Interpretation in Light of R82", for the updated explanation in which black-hole formation is recognized as the final stage of geometric self-cooling rather than purely a mechanical failure.
Gamification vs. Game-Based Learning: Differential Effects on Student Motivation in...
Sayed Mahbub Hasan Amiri

and 5 more

April 02, 2026
Student motivation in science, technology, engineering, and mathematics (STEM) is a chronic problem, prompting educators to incorporate game elements into instruction. However, two distinct methodologies, gamification (adding elements from games, such as points and badges, to non-game contexts) and game-based learning (employing full-fledged games as the primary learning vehicle), are often confused with one another in both practice and research. This study aims to differentiate the effects of these two approaches on student motivation in secondary STEM classrooms. We will use a quasi-experimental design within six middle school science classes (N = 144). Three classes will receive a gamified adaptation of the standard curriculum, while three other classes will interact with a purpose-built educational game covering the same learning objectives. A control group will be taught traditionally. The Intrinsic Motivation Inventory (IMI) will be administered pre- and post-intervention; additional qualitative data will be collected via semi-structured interviews. While both interventions are expected to produce greater motivation than conventional instruction, it is hypothesised that game-based learning will have a greater positive impact on intrinsic motivation and situational interest owing to its immersive narrative and authentic problem-solving contexts. Gamification, in contrast, is expected to play a more prominent role in extrinsic motivation and short-run task completion. Results will provide empirical guidance for educators and instructional designers deciding when to implement which game-informed strategies in order to facilitate sustained engagement in STEM.

Introduction

Background

The continuous drop in student motivation has been identified as a major concern for educators and policymakers around the world within science, technology, engineering, and mathematics (STEM) classrooms [1].
Although STEM literacy is considered essential for economic competitiveness and for cultivating innovative minds in the workforce [1], surveys of students consistently show that interest in STEM-related subject areas declines markedly during the middle and secondary school years [2]. Traditional teaching methods, which typically involve lecture-style transmission of information and pre-packaged problem sets divorced from real-world context, often do not satisfy the basic psychological needs for autonomy, competence, and relatedness, three elements described in self-determination theory as essential for intrinsic motivation [3]. To address this motivational crisis, educators have increasingly adopted game-informed pedagogies [4]. The premise is solid: digital games are carefully crafted to sustain abiding interest through challenge, feedback, narrative, and agency, elements that closely map onto principles of effective learning environments [5]. As a result, two different but often conflated strategies emerged: gamification, incorporating game-design elements such as points, badges, and leaderboards into non-game educational contexts [5], and game-based learning (GBL), using full-blown games as the main vehicle for delivering content and having students practice skills [6]. The proliferation of these approaches mirrors a wider trend towards learner-centred, interactive pedagogies; however, the lack of conceptual clarity about their specific mechanisms remains an ongoing challenge for both research and practice.

Problem Statement

Although gamification and game-based learning rely on a similar foundation of game-inspired design, they are in fact two distinct instructional strategies grounded in different psychological and pedagogical mechanisms [7]. Gamification works by applying motivational affordances to pre-existing curricular content, usually making use of extrinsic motivators to encourage completion of tasks and compliance with desirable behaviours [8].
In contrast, game-based learning embeds the learning objectives within the core mechanics and thematic structure of a game, striving to create intrinsic motivation through immersion, problem-solving, and authentic contextualization [9]. Despite these theoretical differences, the educational literature as well as classroom practice often use the two terms interchangeably, leading to what some scholars have called “conceptual slippage” [10]. This conflation has meaningful consequences for practice: educators may implement gamified elements expecting them to produce the deep engagement characteristic of game-based learning, or conversely, they may deploy complex educational games when simpler gamification strategies would suffice to achieve their intended outcomes [11]. From a research perspective, the absence of comparative studies that consider the differential effects these approaches have on specific motivational outcomes has produced a fragmented evidence base that offers only general guidance for instructional design [12]. Earlier studies have tended to focus on one approach in isolation, and many do not evaluate whether the motivational impact is due to the specific game elements or simply to general pedagogical factors such as novelty or teacher enthusiasm [13]. Furthermore, most existing studies are limited to short-term engagement metrics that do not distinguish between intrinsic and extrinsic motivational pathways [14], creating an important gap in understanding how such approaches differentially shape students’ longer-term interest in STEM domains.

Purpose and Research Questions

This research helps fill this gap by systematically comparing the differential effects of gamification and game-based learning on student motivation in secondary STEM classrooms.
Instead of treating motivation as a unitary construct, this work separates intrinsic motivation (engagement driven by inherent interest and pleasure) from extrinsic motivation (engagement driven by external incentives or performance pressures), reasoning that these forms of motivation may respond differently to game-informed techniques [15]. Using a quasi-experimental design that isolates the distinguishing features of each intervention condition, this study aims to provide empirical clarity on which approach produces better outcomes for which motivational targets. The study is guided by the following research questions:

RQ1: How does intrinsic motivation differ between the gamification group and the game-based learning group?
RQ2: How do extrinsic motivation and task engagement differ between the two conditions?
RQ3: How do students perceive each approach in terms of enjoyment, relevance, and perceived learning?

Together, these questions aim both to quantify differences in motivational outcomes and to describe students' subjective experiences qualitatively, thereby providing a more holistic view of how each strategy plays out in real classroom settings.

Significance of the Study

This study contributes to the body of knowledge in educational technology, both theoretically and practically. Theoretically, it extends self-determination theory by exploring how different game-informed strategies differentially satisfy or thwart learners' basic psychological needs for autonomy, competence, and relatedness [3]. Although self-determination theory has been widely applied to explain motivation in traditional and digital learning contexts, few studies have explicitly examined how the underlying structural differences between gamification and game-based learning cater to these fundamental needs [16].
This study enhances our understanding of the mechanisms that may underlie these motivational processes by mapping intervention characteristics onto theoretical constructs. Practically, the findings will yield actionable guidance for STEM educators, instructional designers, and curriculum developers who must choose appropriate approaches at the intersection of game design and informal learning [17]. Where immediate task completion and behavioural engagement are the primary objectives, gamification offers a low-resource solution; where deep conceptual understanding and enduring interest in the subject matter most, game-based learning may offer a greater payoff [18]. Moreover, by disaggregating motivational outcomes, this study provides practitioners with specific insights for aligning pedagogical practices with different learning aims, optimising both instructional effectiveness and resource utilisation [19]. In doing so, the study furthers the larger mission of transforming STEM education from a source of student alienation into one defined by curiosity, persistence, and genuine intellectual delight [20].

Literature Review

Theoretical Framework: Self-Determination Theory

The theoretical framework for this study is self-determination theory (SDT), a macro-theory of human motivation that has been widely adopted in educational contexts [21]. SDT asserts that intrinsic motivation, doing an activity for its inherent satisfaction rather than for some separable consequence, thrives when three basic psychological needs are fulfilled: autonomy, competence, and relatedness [3]. Autonomy is the experience of volition and psychological freedom; competence is the sense of being effective in one's interactions with the environment; and relatedness is the experience of meaningful connection with others [22].
When these needs are supported, individuals become more intrinsically motivated, engaged, and likely to thrive; when they are thwarted, motivation shifts towards controlled extrinsic forms or may disappear altogether [23]. In educational contexts, SDT has been especially useful for understanding how instructional practices succeed or fail in maintaining student interest [24]. Traditional approaches to STEM teaching, rooted in one-size-fits-all curricula and extrinsic grading pressures, frequently undermine both autonomy and relatedness [25], thereby reinforcing the downward trajectory of student motivation observed across secondary education. In contrast, game-informed pedagogies may fulfil these psychological needs through mechanisms such as choice (autonomy), scaffolded challenge (competence), and collaborative or competitive structures (relatedness) [26]. Most critically, SDT distinguishes between intrinsic motivation (engagement in an activity for its own sake) and extrinsic motivation (engagement in pursuit of separable consequences), a distinction that is vital to understanding the divergent effects of gamification and game-based learning [27]. This study uses SDT as a lens to analyse how each approach affects different motivational pathways.

Gamification in Education: Definitions, Mechanisms, and Empirical Findings

Gamification is the application of game-design elements within non-game contexts [6]. In education, it usually means adding motivational affordances such as points, badges, leaderboards, progress bars, and challenges to existing curricular activities while keeping the instructional content intact [28].
The mechanisms by which gamification affects motivation are mainly behavioural and extrinsic: points provide immediate feedback, badges mark accomplished achievements, and leaderboards enable competitive social benchmarking [8]. The relatively low cost of implementing gamification and the ease with which it can be layered onto traditional schooling frameworks fuelled early enthusiasm [4]. Empirical support for its motivational effectiveness, however, has been inconclusive. Sailer and Homner conducted a meta-analysis of gamification studies that found small to moderate positive effects on cognitive, motivational, and behavioural outcomes, but noted significant variance in effect sizes as a function of contextual factors and implementation quality [29]. The most consistent positive effects concern extrinsic motivation and task-completion metrics, with points and badges reliably increasing engagement duration and participation rates [30]. However, concerns have been raised about the sustainability of such effects: heavy extrinsic reinforcement may actually undermine intrinsic motivation, a phenomenon termed the overjustification effect [31]. Furthermore, leaderboards can harm the motivation of low achievers by diminishing their sense of competence [32]. Qualitative research indicates that students find gamified components superficial or even manipulative when they are not integrated with learning objectives [33].
These findings indicate that although gamification can drive behavioural engagement, it remains unclear whether it can foster deeper intrinsic interest in STEM content.

Game-Based Learning: Definitions, Characteristics, and Empirical Findings

Game-based learning (GBL) is the use of fully developed digital or analogue games as the main means of delivering educational material and developing skills [5]. In contrast with gamification, which adds a layer of game elements on top of existing instruction, GBL embeds learning objectives into the core game mechanics, narrative structure, and problem-solving challenges [34]. Essential features of good educational games include meaningful storytelling that provides context, authentic tasks that require applying knowledge, progressively increasing difficulty that induces flow, and opportunities for exploration and discovery [35]. The theoretical underpinning of GBL is rooted in constructivism, which holds that learners construct understanding through active participation in authentic, situated contexts [36]. Empirical data have largely confirmed the effectiveness of GBL in improving motivation and learning outcomes. A comprehensive meta-analysis by Clark and co-authors showed that games consistently outperformed traditional instruction for both learning and retention across a variety of subjects, with particularly strong effect sizes in STEM [37]. With respect to motivation, GBL has been linked to enhanced situational interest, perceived autonomy, and effort on difficult tasks [38]. In particular, the immersive quality of narrative-driven games likely satisfies the psychological need for relatedness through identification with characters and meaningful in-game social interactions [39].
Longitudinal research has shown that GBL can have long-lasting effects on motivation towards STEM careers when games incorporate authentic scientific practices such as experimentation and modelling [40]. Implementation challenges include higher development costs, extended time commitments, and the need for teacher training to facilitate game-based experiences effectively [41]. Furthermore, poorly designed games, those that underemphasise pedagogy in favour of fun, can produce engagement without improved learning [42]. Despite these challenges, the evidence indicates that GBL is more effective than gamification for cultivating intrinsic motivation and deep conceptual understanding.

Comparative Studies: Review of Existing Research Contrasting the Two Approaches

Although there is now a substantial body of research on both gamification and GBL, little work directly compares their effects on motivation within the same methodological frame [12]. The comparative literature is limited in both quantity and scope. De-Marcos and colleagues tested a gamified learning platform against a serious game for information literacy instruction, finding that while the game produced better learning outcomes, the gamified approach was perceived as more enjoyable [43]. In contrast, Su and Cheng found that elementary science learners exposed to game-based learning exhibited significantly greater learning motivation and self-efficacy than those who received gamified instruction [44]. These contradictory findings suggest that contextual factors such as age group, subject domain, and fidelity of implementation moderate the relative success of either approach.
A recent systematic review by Li et al. [12] identified only twelve empirical studies that directly compared gamification and game-based learning in any educational context, concluding that the evidence base is still too fragmentary to support firm conclusions. The primary gaps identified were: (a) a failure to differentiate between intrinsic and extrinsic motivational outcomes; (b) limited use of no-game control conditions to rule out novelty effects; (c) few examinations of whether student subgroups (defined by factors such as prior gaming experience or academic achievement) benefit differently from each approach; and (d) a lack of qualitative data capturing students' subjective experiences with each approach [14]. Moreover, most comparative studies used pre-existing games or gamified platforms that differ on multiple dimensions beyond the central dichotomy of approach, introducing confounds that limit interpretability [45]. This research addresses these gaps by designing interventions that hold content, duration, and instructor characteristics constant while systematically varying the game-informed strategy.

Hypotheses Development

From the theoretical framework and empirical literature discussed above, we propose the following hypotheses. First, regarding intrinsic motivation, GBL's immersive and autonomy-supportive characteristics are theorised to fulfil the psychological needs for competence and relatedness more effectively than gamification's predominantly extrinsic mechanisms [5], [38]. Hence, H1: Students in the game-based learning condition will show significantly higher levels of intrinsic motivation after the intervention than students in the gamification condition, controlling for pre-test motivation scores.
Second, with respect to extrinsic motivation and task engagement, gamification's reliance on tangible rewards, individual progress tracking, and social-comparison mechanisms [8], [30] is expected to create stronger short-term behavioural compliance. Thus, H2: Students in the gamification condition will show significantly greater extrinsic motivation and task completion than students in the GBL condition. Third, qualitative judgements of enjoyment, relevance, and perceived learning are anticipated to favour GBL, given its ability to situate STEM content within meaningful narratives and authentic problem-solving situations [35], [40]. Therefore, H3: Students in the game-based learning condition will express more positive perceptions of enjoyment, relevance, and perceived learning in semi-structured interviews than students in the gamification condition. Collectively, these hypotheses postulate a trade-off: gamification may be optimal for short-term behavioural engagement, while GBL is hypothesised to produce better outcomes for intrinsic motivation and meaningful learning experiences.

Methodology

This section describes the research design, participant characteristics, intervention conditions, instrumentation, procedures, and data-analysis methods employed to address the research questions. The methodology is structured to ensure replicability and to support valid inferences regarding the differential effects of gamification and game-based learning on student motivation.

Research Design

This study uses a quasi-experimental, non-equivalent-groups design with pre-test and post-test measures [46], [47]. Because random assignment of individual students is impractical in real school settings, intact eighth-grade science classes are randomly assigned to one of three conditions: gamification, game-based learning, and traditional instruction (control).
This approach allows comparison of motivational outcomes while controlling for pre-existing differences through pre-test covariate adjustment [48]. The control group enables treatment effects to be separated from confounding factors such as maturation and history. The design is a 3 (condition) × 2 (time) mixed factorial, allowing analysis of main effects as well as condition-by-time interactions.

Participants

Data were collected during the fall of 2025. Participants were recruited from six eighth-grade science classes in a public middle school in an urban district in the Midwestern United States. The school serves a diverse student body: 44 per cent White, 27 per cent Hispanic/Latino, 19 per cent African American, and 10 per cent Asian or multiracial. About 38 per cent of students qualify for free or reduced-price lunch. All students in the six classes are eligible to participate. Inclusion criteria are enrolment in eighth-grade general science and provision of both student assent and parental consent. Students with individualised education plans (IEPs) requiring alternative science instruction are excluded to prevent contamination of the intervention; similarly, students whose limited English proficiency would affect their responses on the self-report instruments are excluded. The required sample size is determined via an a priori power analysis using G*Power 3.1 [49]. Assuming f = 0.25, α = 0.05, and power = 0.80 for a three-group ANCOVA with one covariate, the required total sample size is N = 172. Target enrolment exceeds this figure to accommodate an expected 15 per cent attrition; with six classes averaging 28–32 students, 180–192 students are available, meeting the required threshold.
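The a priori power analysis can also be approximated programmatically. The sketch below is not the authors' G*Power computation; it assumes a fixed-effects omnibus F test with noncentrality λ = f²·N, numerator df = k − 1, and one covariate absorbed into the error degrees of freedom, then searches for the smallest total N that reaches the target power. Exact figures differ slightly across tools and df conventions.

```python
from scipy import stats

def required_n(f=0.25, alpha=0.05, target_power=0.80, k_groups=3, n_cov=1):
    """Smallest total N for a k-group ANCOVA-style omnibus F test.

    Assumptions (roughly matching common G*Power conventions):
    noncentrality lambda = f**2 * N, numerator df = k - 1,
    denominator df = N - k - n_cov.
    """
    df1 = k_groups - 1
    for n in range(k_groups + n_cov + 2, 5000):
        df2 = n - k_groups - n_cov
        f_crit = stats.f.ppf(1 - alpha, df1, df2)            # rejection threshold
        power = 1 - stats.ncf.cdf(f_crit, df1, df2, f**2 * n)  # noncentral F tail
        if power >= target_power:
            return n
    raise ValueError("no N below the search limit reaches the target power")

print(required_n())  # on the order of 155-160 under these conventions
```

Under these assumptions the search lands somewhat below the N = 172 reported from G*Power, which is consistent with the stated attrition margin; the point of the sketch is the shape of the computation, not the exact figure.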
Demographic and background variables, including prior science achievement (previous semester grade) and prior gaming experience (a self-report item), are collected at pre-test.

Intervention Conditions

All three conditions address identical learning objectives aligned with state science standards for forces and motion (physical science). The instructional duration is four weeks, with equivalent instructional time in each condition. Table 1 summarises the distinguishing features of each condition.

Table 1. Comparison of intervention conditions.