AUTHOREA

Preprints

Explore 66,105 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Neuro-Symbolic Geospatial Intelligence: A Framework for Understanding Nature-Related...
Rishaank Gupta

March 20, 2026
Nature-related financial risks are increasingly central to global reporting frameworks, yet most small and informal businesses in developing countries remain invisible to existing data systems. Because these enterprises lack formal records and defined map locations, their environmental impact cannot be assessed or incorporated into large-scale risk models. Current AI and satellite-based methods are insufficient for this task: purely connectionist approaches require large labeled datasets that do not exist for informal industrial settings, while language models lack the structured reasoning required for financial compliance. This paper proposes a neuro-symbolic framework in which satellite imagery is combined with a logic-based industrial knowledge graph to infer the activity types and environmental risk profiles of informal enterprises in unmapped urban areas. The approach leverages symbolic rules to compensate for data scarcity while producing transparent, auditable reasoning traces suitable for TNFD LEAP compliance. A secondary application is identified: the same spatial analysis that locates industrial clusters for financial disclosure purposes simultaneously maps environmental exposure zones relevant to maternal health surveillance in climate-vulnerable cities. Rather than presenting experimental results, this work defines the research gap, proposes an operational framework architecture, and outlines a three-phase research agenda.
The Dual Role of Extracellular Vesicles in Atherosclerosis: From Pathogenic Mediators...
Zheng Li
Min Wang

and 10 more

March 18, 2026
Atherosclerosis (AS), the leading cause of cardiovascular diseases, demands innovative translational approaches. This review critically examines the dual role of extracellular vesicles (EVs) as key pathological mediators and emerging clinical tools in AS, while acknowledging current limitations in clinical validation, standardization challenges, and regulatory pathways. We systematically analyze how EVs dynamically regulate disease progression across all stages from endothelial dysfunction to plaque rupture by delivering specific biomolecules (e.g., miRNAs, cytokines, proteins, lipids) that modulate endothelial integrity, foam cell formation, and vascular smooth muscle cell phenotype. Critically, we distinguish between established mechanistic insights and preliminary translational findings, highlighting gaps between preclinical promise and clinical reality. We evaluate circulating EVs as potential non-invasive diagnostic and prognostic biomarkers—emphasizing the need for standardized pre-analytical protocols and large-scale prospective validation—and assess engineered EVs as novel targeted therapeutic delivery vehicles, addressing manufacturing scalability, off-target effects, and immunogenicity concerns. By integrating pathophysiological mechanisms with diagnostic and therapeutic applications, this review provides a realistic translational roadmap, positioning EVs at the forefront of advancing precision medicine in atherosclerosis management, while outlining critical milestones required for clinical implementation.
A 17.7–21.2 GHz InP DHBT Cascode Power Amplifier with Adaptive-Bias Linearization
Qindan Cheng
Guangchao Zhou

and 5 more

March 18, 2026
This paper presents a K-band high-linearity power amplifier (PA) implemented in a 0.7-μm InP DHBT process. Operating from 17.7 to 21.2 GHz, the PA employs a cascode topology with an adaptive mixed-bias network (current mirror and resistive divider) to stabilize the operating point and enhance linearity. The unit-cell PA achieves 19-20 dBm output power, 54% PAE, 22.5-26.1 dB gain, and VSWR <1.3. At 3-dB back-off, it exhibits 39% efficiency and IMD3 < -37 dBc. A four-cell combined PA delivers 23-23.5 dBm output power, 44-47% PAE, 20.1-21.5 dB gain, and VSWR <1.5, with IMD3 < -37 dBc at 3-dB back-off. The design offers an excellent efficiency-linearity trade-off, making it suitable for 5G/6G and satellite communication systems.
Design and implementation of an efficient asset management system based on RFID
Xinyuan Shi
Qian Zhuang

and 3 more

March 18, 2026
As enterprises continue to expand and their businesses become increasingly complex, asset management becomes increasingly important. Traditional asset management methods suffer from low efficiency and poor accuracy. RFID technology enables real-time tracking and monitoring of assets, improving data accuracy and management efficiency. The key to efficient management of large numbers of assets by RFID systems lies in the design of anti-collision algorithms within the reader/writer. To solve the multi-tag collision problem, we designed a new RFID anti-collision algorithm: the MBQ-DR (Multi-Bit Mapping Query-Dual Response) QT protocol, built around efficiently reducing redundant queries and handling single-bit collisions. The protocol uses multi-bit mapping queries to completely eliminate idle time slots and a dynamic query update mechanism to reduce redundant queries, improving system efficiency and reducing communication load. A dual response mechanism further reduces the total number of queries and shortens the total recognition time.
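The MBQ-DR protocol itself is the paper's contribution and is not reproduced here. As background, the classical binary query-tree (QT) scheme that this family of protocols extends can be simulated in a few lines, assuming unique binary tag IDs (the function name and the list-based stand-in for the radio channel are illustrative):

```python
def query_tree_identify(tags):
    """Simulate the classical query-tree anti-collision scheme: the reader
    broadcasts a prefix; tags whose ID starts with it reply. Zero replies
    is an idle slot, one reply identifies a tag, and more than one is a
    collision, so the reader extends the prefix with '0' and '1'.
    The list filter stands in for the shared radio channel."""
    identified, queries = [], 0
    stack = [""]  # start from the empty prefix
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tags if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])
        elif len(matching) > 1:
            # collision: split the query space in two
            stack.append(prefix + "1")
            stack.append(prefix + "0")
    return identified, queries
```

In this basic scheme, idle and collision slots both cost a query round; the abstract's multi-bit mapping and dual-response mechanisms target exactly those overheads.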
Impact of the methylphenidate shortage on the dispensing of medicines for Attention D...
George Gadalla
Wern Chai

and 3 more

March 18, 2026
Purpose: To describe nationwide dispensing patterns of methylphenidate in Australia before and during recent prolonged supply shortages, and to assess the extent to which Section 19A (S19A) overseas substitutes contributed to population-level utilisation during the shortage period. Secondary aims were to compare methylphenidate dispensing with other ADHD medicines, examine changes in dispensing across individual strengths and formulations, and quantify delays between S19A approval and Pharmaceutical Benefits Scheme (PBS) listing. Methods: A population-based cohort study was conducted using publicly available monthly dispensing data from the Pharmaceutical Benefits Scheme (PBS) and Repatriation PBS (January 2020–October 2025). Data were standardised to dispensings and Defined Daily Doses (DDD) per 100,000 population. Methylphenidate shortages were identified using the Therapeutic Goods Administration Medicines Shortages database. S19A approvals, PBS listing dates and lapse dates were obtained from the S19A approvals database to determine delays to subsidised access. Results: Dispensing of ADHD medicines increased substantially from 2020 to 2025. Methylphenidate dispensing increased overall but plateaued from late 2024 onward, coinciding with widespread supply disruptions affecting multiple strengths and formulations. DDD analysis demonstrated abrupt reductions in extended‑release methylphenidate strengths (particularly 36 mg and 54 mg) during shortages, with partial compensatory increases in other strengths. Fifteen overseas-registered methylphenidate products received S19A approval; however, time to PBS listing ranged from 83–166 days (median 162 days). After PBS listing, S19A products contributed to <1.5% of all methylphenidate dispensings between July–October 2025. 
Conclusion: Methylphenidate dispensing in Australia became unstable during extensive supply shortages from late 2024, with incomplete substitution across strengths and modest increases in alternative ADHD medicines. Although numerous S19A products were approved, there was minimal uptake of these products at the population level. As ADHD medicine utilisation continues to rise, strengthened national strategies to improve supply chain resilience, transparency and responsiveness are needed.
Disproportionality Analysis of Dietary Supplement Adverse Events in the FDA CAERS Dat...
Hayden Farquhar

March 18, 2026
Purpose: Dietary supplements are consumed by over half of US adults, yet post-market safety surveillance remains limited. The FDA’s CFSAN Adverse Event Reporting System (CAERS) contains over 230,000 adverse event reports but has been the subject of only one prior computational signal detection analysis. We applied four disproportionality methods, temporal signal detection, and demographic stratification to the expanded CAERS dataset (2004–2025) to identify and validate safety signals for dietary supplements. Methods: We analysed 48,840 unique adverse event reports from the FDA CAERS database. Four disproportionality methods — PRR, ROR, Gamma-Poisson Shrinker (GPS), and BCPNN — were applied to 4,779 product–PT pairs with N ≥ 3. Cumulative sum (CUSUM) control charts provided temporal signal detection. Demographic stratification by age, sex, and product category used Woolf tests for odds ratio homogeneity. Twenty-four validation and sensitivity analyses assessed signal robustness, including multiple testing correction, positive/negative control validation, bootstrap false discovery estimation, and cross-validation against published international signals. Results: We identified 3,017 robust product-name-level signals detected by three or more methods, of which 2,146 were detected by all four methods. Kratom occupied 7 of the top 10 positions by composite risk score, reflecting the breadth of its multi-organ toxicity rather than independent signals, with Kratom–Death ranked highest (PRR 19.7, N = 178). Hepatotoxicity signals clustered in herbal/botanical and weight loss products. Herbal/botanical supplements carried the highest serious outcome rate (78.0%; adjusted OR 2.08, 95% CI 1.53–2.84). CUSUM detected 451 temporally emerging signals (14.9%), including preliminary hepatic enzyme signals for AG1 and Nutrafol (2023–2025). 
Of 3,017 robust signals, 97.3% survived false discovery rate correction (α = 0.05); 17 of 21 (81%) established international safety signals were recovered. A total of 148 signals were classified as critical tier by composite risk scoring. Conclusions: This computational pharmacovigilance analysis of dietary supplements identified over 3,000 robust product-name-level safety signals across multiple analytic dimensions; the true number of distinct ingredient-level safety concerns is lower due to product name fragmentation. The results support regulatory prioritisation of herbal/botanical and weight loss supplement categories and identify preliminary emerging signals warranting continued monitoring.
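Of the four disproportionality methods named, PRR and ROR reduce to simple ratios over a 2x2 contingency table; GPS and BCPNN add Bayesian shrinkage to stabilise estimates at small counts and are omitted here. A minimal sketch with illustrative counts (not CAERS figures):

```python
def disproportionality(a, b, c, d):
    """Proportional reporting ratio (PRR) and reporting odds ratio (ROR)
    from a 2x2 contingency table:
        a: reports mentioning the product AND the event of interest
        b: reports mentioning the product, other events
        c: reports for all other products AND the event
        d: reports for all other products, other events
    """
    prr = (a / (a + b)) / (c / (c + d))
    ror = (a * d) / (b * c)
    return prr, ror

# Illustrative counts only
prr, ror = disproportionality(a=20, b=80, c=10, d=890)  # PRR = 18.0, ROR = 22.25
```

Signal criteria in practice combine such ratios with minimum report counts, which is consistent with the abstract's requirement of N >= 3 and agreement across methods.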
Hierarchical climatic forcing and population rhythms of bark and ambrosia beetles in...
Evahtira Gunggot
Roger Beaver

and 7 more

March 18, 2026
This study investigated the temporal dynamics and environmental drivers of bark and ambrosia beetles (Curculionidae: Scolytinae and Platypodinae) in a tropical rainforest in Northern Borneo. Utilizing an extensive long-term dataset (2017–2020), we employed Multi-Taper Method (MTM) spectral analysis and lagged path analysis to decode the underlying structure of population fluctuations. To isolate true cyclical relationships and ensure statistical rigor, both climatic variables and insect capture data were rigorously detrended prior to the path analysis. Our results revealed a multi-scale periodicity, most notably a dual-significant 35-day cycle in Total Trap Capture (TTC). We propose that these short-term oscillations are fundamentally rooted in intrinsic generation cycles reflecting the developmental duration of individual insect species. Within this framework, the 35-day MJO acts primarily as a "gatekeeper" rather than the sole driver, periodically providing resources through windthrow while modulating flight activity through rainfall inhibition. These dynamics are further governed by a hierarchy of "ecological memory," where climatic forcing at specific lags determines realized abundance. We identified two distinct sub-annual biological legacy tiers: a ~3-month lag representing a host-stress response pathway, and an ~8-month lag driven by the accumulation of multivoltine generations and the obligate incubation period required for fungal symbionts to degrade wood substrates. Based on these findings, we propose the Resonance Hypothesis, suggesting that intrinsic rhythms are phase-locked by intraseasonal pulses and modulated by multi-month biological legacies. These cycles are ultimately nested within supra-annual climatic modes, such as ENSO and the IOD, which dictate long-term population baselines. Our results suggest that the apparent stochasticity of tropical insects is an interference pattern created by these overlapping temporal rhythms.
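As methodological background (not the authors' actual pipeline), a minimal Thomson multi-taper PSD estimate can be built from SciPy's Slepian (DPSS) tapers; the function name, the time-bandwidth product, and the synthetic 35-day example below are all illustrative:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.0):
    """Thomson multi-taper PSD: average the periodograms of the signal
    windowed by K = 2*NW - 1 orthogonal Slepian (DPSS) tapers, trading
    a little frequency resolution for much lower spectral variance."""
    n = len(x)
    k = int(2 * nw) - 1
    tapers = dpss(n, nw, k)                         # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs

# Synthetic daily trap-capture series with a 35-day cycle plus noise
t = np.arange(700.0)                                # ~2 years of daily data
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * t / 35.0) + 0.3 * rng.standard_normal(700)
freqs, psd = multitaper_psd(x, fs=1.0)              # fs in cycles/day
peak_period = 1.0 / freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
```

With a 700-day record the spectral peak lands near a 35-day period, the kind of signature the abstract reports for TTC.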
Dual modification strategy improves the adsorption efficiency of corn straw hydrochar...
sinan Wang
Xin Li

and 15 more

March 18, 2026
To identify environmentally friendly phosphate adsorption materials for polluted water and realize high-value utilization of the agricultural waste corn straw, the effects of KOH and FeCl3 modification on corn straw hydrochar prepared under mild hydrothermal conditions were investigated. Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), XRD, and XPS were used to characterize the surface functional groups, structure, element content, and morphology of the modified hydrochar. The adsorption mechanism of phosphate in water was explored through adsorption kinetics and thermodynamics tests. The results showed that phosphate adsorption kinetics on the modified hydrochar conformed to the pseudo-second-order kinetic equation (R2>0.95, P ≤ 0.05), and the adsorption thermodynamics conformed to the Langmuir equation (R2 ≥ 0.94, P ≤ 0.05). Phosphate adsorption was a spontaneous, endothermic (ΔGθ<0, ΔHθ>0), monolayer process controlled by rapid reaction. Both FeCl3- and KOH-modified hydrochars showed improved phosphate adsorption, but through different mechanisms: FeCl3-modified hydrochar adsorbed phosphate mainly through strong electrostatic attraction, whereas after KOH modification, phosphate adsorption depended mainly on a large specific surface area and ion exchange. The FeCl3-modified corn straw hydrochar had a large adsorption capacity for phosphate, with a maximum of 2.25 mg/g at 45 ℃, and can serve as a potential adsorption material for phosphate in polluted water.
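As an illustration of the kinetic fitting described (not the authors' code), the pseudo-second-order model is usually fit through its standard linearisation t/q_t = 1/(k2*qe^2) + t/qe; the data below are generated from the model itself, with qe chosen near the reported 2.25 mg/g maximum capacity:

```python
import numpy as np

def fit_pseudo_second_order(t, qt):
    """Fit the linearised pseudo-second-order kinetic model
        t/q_t = 1/(k2*qe**2) + t/qe
    by ordinary least squares: the slope gives 1/qe and the
    intercept gives 1/(k2*qe**2). Returns (qe, k2)."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe**2)
    return qe, k2

# Illustrative data generated from the model (qe = 2.25, k2 = 0.05)
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
qt = (0.05 * 2.25**2 * t) / (1.0 + 0.05 * 2.25 * t)
qe_fit, k2_fit = fit_pseudo_second_order(t, qt)
```

On real data the quality of this linear fit is what the abstract's R2 > 0.95 criterion measures.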
Genomic Variation Landscape and Population Genetic Analysis of Camellia longistyla Ba...
Fengchan Wu
Binyang Zhao

and 6 more

March 18, 2026
Camellia longistyla is a species endemic to Guizhou Province, possessing significant ornamental value and potential economic value for oil production. To elucidate its genetic background and guide scientific conservation and sustainable utilization, this study employed whole-genome resequencing for the first time on 98 individuals from three natural populations in Chishui City and Leishan County. Through high-quality SNP markers, we systematically analyzed its genomic variation characteristics, population genetic structure, and genetic diversity levels. A total of 61,370,270 high-quality SNPs were identified, uniformly distributed across chromosomes but with distinct variation hotspot regions. Population structure analysis clearly divided all samples into two main subpopulations (Pop60 and Pop38), corresponding precisely to their geographical origins, indicating that geographical isolation is the key factor driving population differentiation. Genetic diversity assessment indicated moderately low overall genetic diversity, with a significant imbalance between the two subpopulations: the larger Chishui population (Pop60) exhibited lower genetic diversity than the Leishan population (Pop38), suggesting a higher risk of genetic diversity loss in the former. This study provides the first genome-level insights into the genetic structure and diversity status of C. longistyla, offering crucial scientific evidence for formulating differentiated conservation strategies, identifying priority conservation units, and guiding future germplasm innovation and breeding research.
The influence of gender and professional background on the accuracy of visual blood l...
Maximilian Niederer
Mathias Bader

and 7 more

March 18, 2026
Objective: Accurate visual estimation of blood loss is critical for early recognition of obstetric haemorrhage. Despite its widespread use, visual estimation is prone to substantial bias. While professional experience has been shown to influence estimation accuracy, the potential contribution of gender-associated differences in visual perception remains insufficiently explored. Design: We carried out a prospective observational simulation-based study at a tertiary university medical centre in Graz, Austria. Setting/Sample: 50 physicians (28 female/22 male) were recruited from anaesthesiology and obstetrics. Eligibility required at least three months of clinical experience in obstetrics or obstetric anaesthesia. Clinicians with known colour-vision deficiency were excluded. Methods: All participants visually estimated blood loss in four simulated obstetric scenarios. Blood volumes and haemoglobin concentrations were verified by volume measurement and point-of-care testing. Each scenario was viewed individually under standardized conditions, without access to physiological or contextual clinical information. Main Outcome Measures: The primary outcome was absolute estimation error (mL) according to the gender or professional background of participants. Secondary outcomes included scenario-specific accuracy and the association between self-rated confidence and estimation accuracy. Results: Women outperformed men in low- and moderate-volume scenarios. Across all scenarios, the overall difference in median absolute estimation error did not reach statistical significance. Professional background showed a stronger effect than gender: gynaecologists were significantly more accurate than anaesthetists across most scenarios (p < 0.001). Conclusions: Visual blood loss estimation accuracy in obstetric simulations is influenced by both gender and professional background. Gender-related differences appear volume-dependent, whereas professional experience exerts a consistent influence.
Immune-Mediated Changes in Conscious Experience: A Computational Model of Cytokine-Dr...
Aditi Doddavaram

March 18, 2026
The sense of a unified minimal self is a dynamical achievement of the brain, arising from the precise coordination of predictive processing and excitatory/inhibitory (E/I) interactions. While the phenomenological shifts of the sickness response are well documented, a unified computational mechanism linking sickness behaviour to the destabilization of consciousness does not yet exist. This paper proposes that systemic inflammation acts as a global uncertainty signal that disrupts the self-model by reducing precision and increasing autocorrelation; specifically, that pro-inflammatory cytokines interfere with GABA-ergic signalling and interoceptive inference. To test this hypothesis, we employ a Wilson-Cowan neural mass model with an added cytokine parameter and test it under different cytokine loads to observe changes in E/I dynamics. The model exhibited shallower attractor basins, characterised by increased temporal autocorrelation and under-damped recovery from stochastic perturbations. This illustrates how immune-modulated inhibition could produce dynamical regimes consistent with the temporal blurring observed during sickness. By reframing sickness behaviour as a reactive reconfiguration of self-model stability, this paper helps to distinguish modulatory immune effects from structural neuropathology and advances understanding of dissociative states in acute and chronic inflammatory conditions.
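A minimal sketch of the model class described, assuming illustrative coupling weights and a hypothetical cytokine parameter that linearly weakens inhibitory coupling (the paper's actual parameterisation may differ):

```python
import numpy as np

def simulate_wilson_cowan(c_load=0.0, t_max=200.0, dt=0.1):
    """Euler-integrate a Wilson-Cowan excitatory/inhibitory pair in
    which a cytokine-load parameter scales down inhibitory coupling,
    a stand-in for the proposed GABA-ergic interference. All weights
    are illustrative. Returns the excitatory activity trace."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling weights
    p_e, p_i = 1.0, 0.5                              # external drives
    gaba = 1.0 - 0.5 * c_load     # cytokine load weakens inhibition
    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(t_max / dt)):
        de = -e + sigmoid(w_ee * e - gaba * w_ei * i + p_e)
        di = -i + sigmoid(w_ie * e - gaba * w_ii * i + p_i)
        e += dt * de
        i += dt * di
        trace.append(e)
    return np.array(trace)
```

Comparing traces across cytokine loads then reduces to measuring temporal autocorrelation or recovery time after a perturbation, the quantities the abstract uses to characterise attractor-basin shallowing.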
Comparison of two groups in primary dysmenorrhea: serum endocan, procalcitonin, malon...
Sibel EJDER TEKGUNDUZ
Serap EJDER APAY

and 3 more

March 18, 2026
Objectives: This study aimed to investigate the relationships between inflammatory, oxidative, and endothelial biomarkers (serum endocan, procalcitonin, malondialdehyde, and total antioxidant status) in women with and without primary dysmenorrhea. Design: Prospective cohort study. Setting: Women aged 18–30 years with moderate to severe primary dysmenorrhea and healthy controls without dysmenorrhea were enrolled. Population or Sample: A total of 42 women with primary dysmenorrhea and 42 healthy controls were included in the study. Methods: All participants underwent pelvic ultrasonography on the first day of menstruation. Pain severity was assessed using the visual analog scale. Venous blood samples were collected on the same day, and serum endocan, procalcitonin, malondialdehyde, and total antioxidant status levels were measured using ELISA. Clinical and laboratory variables were compared between groups, and univariate binary logistic regression analysis was performed to identify factors associated with dysmenorrhea. Main Outcome Measures: Serum MDA and TAS levels of the groups. Results: Women with primary dysmenorrhea had significantly higher serum procalcitonin and endocan levels compared with controls (p < 0.05). No significant differences were observed in MDA or TAS levels between groups. Univariate logistic regression analysis revealed that serum procalcitonin, endocan, menstrual cycle length, follicle-stimulating hormone, and dehydroepiandrosterone sulfate were significantly associated with the presence of dysmenorrhea. TAS levels showed an inverse correlation with pain severity.
Population genetic structure and biogeographic distribution of tropical Halodule unin...
ANGELA GRACE SINGSON
Koji Takayama

and 10 more

March 18, 2026
Halodule uninervis plays a critical role in the seagrass ecosystem with its opportunistic traits, which facilitate rapid expansion and persistence in disturbed and changing environmental conditions. Yet the population genetic dynamics and connectivity of this species, particularly in the Philippines, remain poorly understood. Using genome-wide single-nucleotide polymorphisms (SNPs) generated from MIG-seq data, we assessed patterns of clonal reproduction, genetic differentiation, isolation by distance, and recent migration across ten populations in a seascape characterized by complex oceanographic circulation linking the Visayas and Mindanao. Clone detection revealed pronounced spatial variation ranging from predominantly sexual populations with high genotypic richness to strongly clonal populations dominated by a few multilocus genotypes. Genetic diversity was generally low to moderate, consistent with seagrass life-history traits, but the outgroup population exhibited high nucleotide diversity and divergent genotypes, suggesting long-term persistence of distinct lineages. Population structure analyses showed weak but detectable genetic structuring, with overlapping genetic clusters and heterogeneous ancestry profiles indicating regional connectivity. Genetic differentiation among populations was low to moderate and not significantly associated with geographic distance, highlighting the importance of oceanographic processes over spatial proximity. Contemporary migration analyses revealed high self-recruitment across most populations, coupled with asymmetric gene flow, indicating that Mambajao is a regional convergence or sink population. These findings demonstrate that H. uninervis populations in Visayas and Mindanao form a semi-connected metapopulation influenced by clonal reproduction, selective dispersal, and complex circulation patterns. 
Incorporating genetic connectivity into network-based restoration and management is essential for enhancing seagrass resilience under ongoing environmental change.
Impact of a Public Health–Based Etonogestrel Implant Contraceptive Interven...
Gisele Marquini
Bárbara Cunha Mello Lazarini Antonioli

and 5 more

March 18, 2026
Objective: World medical societies recommend Long-Acting Reversible Contraception (LARC) for its high efficacy and positive impact on maternal mortality arising from complications of unplanned pregnancy. The objective was to evaluate the maternal mortality rate resulting from complications of unplanned pregnancies in a public health system, before and after the availability of a LARC (subcutaneous etonogestrel implant). Methods: Statistical analysis using independent tests (Student's t; Cohen's d) compared maternal mortality rates resulting from complications of unplanned pregnancies, at menacme and in adolescents, in the periods before and after the intervention with the etonogestrel subcutaneous implant in a public health system. Results: Data demonstrated a statistically significant reduction in pregnancy in this population (p=0.042; p=0.003), with practically unchanged maternal mortality rates. Conclusion: Maternal mortality rates may not show statistically significant differences after the availability of a LARC, owing to adequate care for pregnancy complications. However, the etonogestrel subcutaneous implant has a positive impact on reducing the birth rate at reproductive age, especially in adolescents, when made widely available by public health systems, which can help control maternal mortality resulting from complications of unplanned pregnancies.
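For reference, the Cohen's d effect size cited in the Methods (with pooled standard deviation) reduces to a few lines; the sample values in the usage below are illustrative, not the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation: the standardized
    difference between two group means."""
    n1, n2 = len(group_a), len(group_b)
    m1 = sum(group_a) / n1
    m2 = sum(group_b) / n2
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative usage with made-up rates
d = cohens_d([10, 12, 14], [8, 9, 10])  # ~1.90, a large effect
```

Conventionally, d near 0.2 is a small effect, 0.5 medium, and 0.8 large.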
Distinguishing Iatrogenic Overload from Intrinsic Myocardial Injury Following Postpar...
Mina Iyaye
Anupa Parajuli

and 3 more

March 18, 2026
Title: Distinguishing Iatrogenic Overload from Intrinsic Myocardial Injury Following Postpartum Hemorrhage
Gut Microbiome Dysbiosis in Early Pregnancy as a Predictive Marker for Rare Placental...
Yashfeen Talpur

March 18, 2026
Title: Gut Microbiome Dysbiosis in Early Pregnancy as a Predictive Marker for Rare Placental Insufficiency Syndromes. Article type: Letter to the editor.
Thinking Cybersecurity PART IV The Road Ahead - CHAPTER 11 From Models to Markets: In...
Paulo H. Leocadio

March 18, 2026
Chapter 11
From Models to Markets: Industrializing Cybersecurity AI
Paulo H. Leocadio

Introduction

Artificial intelligence in cybersecurity has advanced beyond initial testing. The last decade has been marked by architectural innovations (transformers, large language models, and diffusion-based generative systems) that have demonstrated unmatched capabilities in pattern recognition, anomaly detection, and language-driven automation. However, capability alone does not make a technology ready for industry. The key question for organizations today is not whether models can perform, but whether they can operate reliably, securely, and economically in production environments.

The transition from research artifact to deployable system represents a structural inflection point. Early cybersecurity AI implementations relied heavily on centralized cloud inference, opaque vendor APIs, and resource-intensive model stacks. While these approaches accelerated experimentation, they introduced architectural fragility: latency dependencies, cost volatility, governance ambiguity, and strategic reliance on proprietary platforms. Industrialization demands a different posture, one centered on compression, controllability, auditability, and infrastructural integration.

Three converging trends define this new phase. First, model efficiency engineering has shifted from being a luxury to an operational necessity. Techniques such as quantization, pruning, and knowledge distillation enable large language models to operate within constrained compute environments. Compression is no longer just a cost-saving tactic; it acts as the key mechanism for distributed deployment across enterprise networks and edge devices.

Second, edge and on-device inference reshape the landscape of trust. Instead of routing all telemetry through centralized cloud services, detection and behavioral modeling increasingly happen at the data source.
In cybersecurity situations where milliseconds count, this architectural decentralization boosts resilience, privacy, and operational continuity.

Third, the emergence of embedded agentic systems reframes AI not as a passive analytical tool but as an active participant in security control loops. Contemporary platforms integrate large language models with policy engines, workflow orchestration, and constrained execution layers. Industrial viability, however, depends on governance scaffolding (interpretability, audit logging, and bounded autonomy) (Doshi-Velez and Kim 2017; NIST 2023).

The next three years will likely determine whether cybersecurity AI becomes foundational infrastructure or remains an experimental overlay. Market viability will hinge on reconciling performance with policy constraints, autonomy with oversight, and scalability with cost discipline. The competitive frontier will not be defined by parameter count alone, but by architectural stability and system-level coherence.

This chapter examines that transition. It explores engineering strategies that translate model capability into production resilience, the trade-offs between closed API ecosystems and open-source sovereignty, and the infrastructural patterns likely to shape cybersecurity AI markets in the near future. The goal is not speculative futurism but structured anticipation rooted in observable technical trajectories. Industrialization, in this context, is not the end of innovation, but the beginning of infrastructure.

11.1 From capability to operational constraint

Artificial intelligence systems in cybersecurity have achieved significant progress in pattern recognition, anomaly detection, and language-based reasoning. Transformer architectures and the broader foundation model approach demonstrate that large-scale pretraining can encode generalized behavioral representations.
In controlled environments, these systems reach high benchmark scores in classification, summarization, and threat analysis tasks.

However, benchmark capability does not equal industrial viability. Production cybersecurity environments impose constraints that academic evaluations often overlook. These include latency budgets measured in milliseconds, strict data locality requirements, deterministic audit trails, adversarial robustness, and predictable cost ranges. A model that performs well under unrestricted GPU resources may become impractical when deployed across distributed endpoints or high-volume telemetry streams.

Industrialization, therefore, begins with the recognition of constraints. Three operational forces redefine how AI systems must be engineered for market deployment:

- Compute efficiency: models must operate within bounded hardware budgets.
- Control & governance: outputs must be explainable, auditable, and policy-aligned.
- Deployment topology: inference must align with architectural realities: cloud, hybrid, or edge.

This shift changes model design priorities. During research, scaling laws encouraged increasing parameters and data. In production, the focus shifts to minimizing memory use, reducing inference latency, and ensuring stable behavior under adversarial conditions.

The transition is not gradual. It is architectural. Cybersecurity AI must shift from centralized experimentation platforms to distributed control-plane components integrated within enterprise infrastructure. Instead of functioning as isolated analytical services, models increasingly operate as nodes within structured decision loops—collecting telemetry, producing probabilistic evaluations, interacting with policy engines, and escalating bounded actions.

Industrialization, therefore, requires redefining AI systems as infrastructural subsystems rather than just tools. Performance remains essential but is no longer enough.
Viability depends on controllability, portability, and economic sustainability. The sections that follow examine how compression techniques, edge-native deployment models, and embedded agentic architectures collectively redefine the operationalization of cybersecurity AI at market scale.

11.2 Efficiency engineering and model compression

Resource limitations primarily constrain the industrial deployment of large language models in cybersecurity environments. Memory footprint, inference latency, bandwidth consumption, and energy utilization define practical limits long before theoretical performance ceilings are reached. Efficiency engineering is therefore not an optimization afterthought but a prerequisite for viability.

Early scaling paradigms prioritized parameter growth as the primary driver of performance gains. Empirical scaling laws suggested monotonic improvements as model size and data volume increased. However, production cybersecurity systems operate within heterogeneous hardware environments: edge devices, virtualized cloud instances, containerized clusters, and occasionally air-gapped infrastructure, where unconstrained scaling is economically and operationally untenable.

Model compression addresses this tension by reducing computational and memory requirements while preserving functionality.

11.2.1 Quantization

Quantization reduces the numerical precision of model weights and activations from 32-bit floating-point representations to lower-bit-width formats (e.g., 16-bit, 8-bit, 4-bit). Post-training quantization techniques such as GPTQ (Frantar, et al. 2023) and low-bit fine-tuning strategies such as QLoRA (Dettmers, et al. 2023) demonstrate that large transformer models can retain competitive performance under aggressive precision reduction. From an industrial perspective, quantization yields:

Reduced VRAM requirements
Increased inference throughput
Lower hardware cost thresholds
Feasibility of on-device execution

In cybersecurity pipelines processing high-volume telemetry, these gains translate directly into operational scalability. A quantized model deployed across distributed endpoints may outperform a centralized high-precision model once network latency and concurrency are factored into system-wide evaluation. Quantization thus shifts optimization from model-centric metrics to system-level performance.

The industrialization of cybersecurity AI is best understood as a structural reorientation. Early deployments prioritized model capability and centralized inference. Mature deployments prioritize control, distribution, and governance. Figure 11.1 illustrates this transition from model-centric design toward system-centric architecture.
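The precision-reduction step at the heart of these techniques can be illustrated with a minimal, dependency-free sketch. This is not GPTQ or QLoRA (both add error-compensating weight updates and fine-tuning on top); it only shows the basic affine round-to-nearest arithmetic that maps 32-bit weights onto the int8 range, and the helper names are illustrative.

```python
# Minimal sketch of 8-bit affine quantization arithmetic (illustrative only:
# production schemes such as GPTQ add error-compensating weight updates on
# top of this basic round-to-nearest step).

def quantize_int8(weights):
    """Map float weights onto the signed int8 range [-128, 127]."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against all-equal weights
    zero_point = round(-w_min / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))

# Absent clipping, reconstruction error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

The system-level gains described above follow from this arithmetic: int8 codes occupy a quarter of the memory of 32-bit floats, which is what lowers VRAM requirements and hardware cost thresholds.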
Thinking Cybersecurity PART III - CHAPTER 10 Can You Run Diffusers Here? Exploring Op...
Paulo H. Leocadio

March 18, 2026
Chapter 10
Can You Run Diffusers Here? Exploring Open AI Frameworks across Cloud Platforms Beyond SageMaker
Paulo H Leocadio

Introduction

From models to markets, autonomous cybersecurity systems do not fail because models are inaccurate; they fail because deployment architectures are fragile. The transition from experimental pipelines to operational infrastructure requires compression strategies, container discipline, hardware awareness, governance hooks, and repeatable MLOps (Machine Learning Model Operationalization Management) scaffolding. This chapter examines that transition.

Throughout this book, Hugging Face Diffusers and open transformer-based models are described as modular, composable primitives. However, primitives alone do not form infrastructure. Infrastructure appears when models are integrated into training pipelines, monitored at inference endpoints, containerized within orchestration layers, and managed within execution environments. Industrialization, therefore, is not about increasing scale; it is about enhancing structural control.

At the center of this operationalization sits a pragmatic stack:

Hugging Face Diffusers (HFD) for modular generative and embedding pipelines
PyTorch as the computational substrate
Amazon SageMaker as the managed orchestration and deployment layer

PyTorch provides a tensor-level execution model that enables pruning, quantization, distributed training, and scheduler manipulation. Every equation introduced earlier in this book (risk curves, latency functions, reward alignment constraints) ultimately resolves into tensor operations and gradient updates. Industrial AI must therefore understand its computational substrate (NIST 2026).

Amazon SageMaker transforms these tensor computations into governed, scalable systems.
It enables:

Distributed training jobs
Managed endpoints
Model registry and versioning
Monitoring and drift detection
Secure container deployment

When Diffusers pipelines are deployed through SageMaker with PyTorch-backed containers, they transition from research artifacts to operational control systems. This chapter expands that core stack along three industrial dimensions:

Compression and Efficiency: Quantization, pruning, and distillation techniques that enable edge and low-latency deployment without sacrificing structural reliability.
On-Device and Embedded Inference: Moving intelligence closer to sensors, endpoints, and adversarial surfaces.
Agentic Orchestration at Scale: Coordinating model inference, memory layers, and policy gates across distributed environments.

Industrialization also creates a tension between openness and control. Open-source diffusion models enable composability and sovereignty. Managed AI Application Programming Interfaces (APIs) offer abstraction and speed. The choice of architecture is strategic, not ideological. Cybersecurity systems require auditability, deterministic fallback behaviors, and containment boundaries. These constraints influence how generative models are deployed.

While earlier chapters emphasized theoretical rigor and mathematical grounding, this chapter focuses on deployment discipline. Each architectural pattern discussed will be connected to minimal implementation frameworks using PyTorch and SageMaker, ensuring that theoretical concepts stay practically grounded. By the end of this chapter, readers will understand not only how to run diffusion-based cybersecurity pipelines but also how to stabilize, compress, monitor, and govern them in production environments.

Industrialization, in this context, is not the scaling of intelligence; it is the stabilization of autonomy.

10.1 Revisiting the canonical foundation

Evaluating alternative cloud environments requires a clear point of reference.
Before examining whether Hugging Face Diffusers and open Transformer-based workflows can be effectively deployed outside Amazon SageMaker, we must restate the architectural foundation on which this book is built.The preceding chapters developed mathematical models, control-plane abstractions, and deployment constraints. However, those constructs were never intended to exist independently of infrastructure. They were formulated within a specific operational context that now warrants explicit articulation. Figure 10.1 shows the AWS architecture stack.
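The agentic-orchestration dimension described in this chapter, in which policy gates mediate between model inference and action, can be sketched in a few lines. The `PolicyGate` class and its `approve` method below are illustrative assumptions, not a SageMaker or Diffusers API; they show the pattern of bounded autonomy with mandatory audit logging.

```python
# Illustrative sketch of a policy gate: model output does not act directly;
# an infrastructure-level gate decides which actions may execute and logs
# every decision for audit. Names are assumptions, not a real API.

from dataclasses import dataclass, field

@dataclass
class PolicyGate:
    """Bounded-autonomy gate: only allowlisted, high-confidence actions pass."""
    allowed_actions: set
    audit_log: list = field(default_factory=list)

    def approve(self, action: str, confidence: float, threshold: float = 0.9) -> bool:
        verdict = action in self.allowed_actions and confidence >= threshold
        self.audit_log.append((action, confidence, verdict))  # log every decision
        return verdict

gate = PolicyGate(allowed_actions={"quarantine_host", "rotate_credentials"})

assert gate.approve("quarantine_host", confidence=0.97)      # allowed + confident
assert not gate.approve("delete_volume", confidence=0.99)    # not allowlisted
assert not gate.approve("quarantine_host", confidence=0.42)  # below threshold
assert len(gate.audit_log) == 3                              # all attempts logged
```

The design point is that denial is the default: the gate records every attempt, approved or not, so the audit trail reconstructs what the model wanted to do, not only what it was allowed to do.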
Thinking Cybersecurity PART III - CHAPTER 8 Platforms, Privacy, Vaults, Execution Sur...
Paulo H. Leocadio

March 18, 2026
Chapter 8
Platforms, Privacy, Vaults, Execution Surfaces
Paulo H Leocadio

Introduction

As artificial intelligence systems transition from passive analytical tools to agentic participants in cybersecurity operations, the primary source of risk shifts from model capability to execution context. The question is no longer whether an AI system can reason effectively about a threat, but where, how, and under what constraints that reasoning can materialize into action.

Contemporary discourse on agentic AI concentrates on model architecture (e.g., Transformers, diffusion pipelines, reinforcement learning, or prompt chaining) while treating the underlying platform as a neutral substrate. This assumption is incorrect. Platforms encode governance decisions through identity boundaries, execution privileges, persistence models, and observability constraints. These properties ultimately determine whether autonomy remains bounded or escalates into systemic exposure.

This chapter reframes platforms, privacy mechanisms, secret vaults, and execution surfaces as control infrastructure rather than auxiliary services. In operational cybersecurity environments, these elements function as the final authority layer between cognition and consequence. Models may generate hypotheses, plans, or recommended actions, but platforms determine what can be executed, with what authority, for how long, and under what auditability guarantees.

Privacy, in this context, is not merely a compliance obligation. It is a stability requirement. Persistent context accumulation, uncontrolled memory retention, and opaque data reuse introduce feedback loops that amplify misalignment and undermine forensic accountability. Systems that cannot enforce forgetting cannot be governed reliably, regardless of model sophistication.

Similarly, vaults and secrets management systems must be understood as trust boundaries, not storage conveniences.
By externalizing authority (e.g., credentials, signing keys, privileged tokens) away from cognitive components, architectures preserve a critical separation between reasoning and power. This separation enables revocation, replay, and audit, even under adversarial or degraded conditions.

Execution surfaces represent the narrow interfaces where intent becomes impact. Poorly defined or overly permissive execution paths create nonlinear risk escalation, particularly in automated or semi-automated response scenarios. Constraining these surfaces through simulation, scope limitation, and policy mediation is essential to preventing cascading failure.

The central argument of this chapter is that safe agentic behavior does not emerge from smarter models, but from stricter infrastructure. Autonomy must be shaped by platform-enforced boundaries that are inspectable, revocable, and independently governed. In cybersecurity operations, where errors propagate at machine speed and stakes are measured in real-world damage, this distinction is not theoretical.

By examining platforms, privacy controls, vault architectures, and execution surfaces as interlocking components of a cognitive control plane, this chapter establishes the infrastructural preconditions required for deploying agentic AI systems responsibly in security-critical environments.

8.1 Platforms as cognitive substrates

Platforms have long been recognized as active participants in system behavior rather than neutral execution environments, particularly through their enforcement of identity, isolation, and privilege boundaries (Saltzer and Schroeder 1975, Lampson 2004). Large-scale cloud architectures further encode governance decisions by centralizing control planes that regulate execution, persistence, and observability independently of application logic (Armbrust, et al. 2010, Burns, et al. 2016).
Containerization and orchestration frameworks formalize these constraints by separating workload description from execution authority, enabling reproducible and policy-governed runtime behavior (Merkel 2014, Pahl 2015).

In security-critical systems, execution context is as consequential as algorithmic capability, as privilege expansion, lateral movement, and persistence failures frequently originate at the platform layer rather than within application code (NIST Joint Task Force 2020). Observability infrastructures, such as distributed tracing and structured telemetry, transform platforms into control instruments by enabling deterministic replay, forensic reconstruction, and post hoc accountability (Sigelman, et al. 2010, Google SRE, 2016). Conversely, platforms optimized primarily for elasticity or developer convenience often collapse identity, execution, and persistence into a single operational plane, increasing systemic exposure under automation (AWS 2024, Microsoft 2023).

These findings support the architectural position that agentic behavior is bounded not only by model design but also by platform-enforced constraints on authority, visibility, and actionability. As autonomy increases, the platform increasingly functions as a cognitive substrate, defining not only where computation occurs, but under what conditions reasoning may safely transition into action (Leveson 2012, NIST 2023).

The platform is not a neutral substrate; it is a measurable attack surface. To move beyond qualitative descriptions of security, we define the Execution Attack Surface \(\left(A_{s}\right)\) as a quantifiable ratio of exposure:

\begin{equation}
A_{s}=\frac{\sum\text{Authorized System Calls}}{\sum\text{Total Available Kernel Interfaces}}\nonumber
\end{equation}

By calculating the \(A_{s}\) ratio for a containerized agent, we establish a hard limit on agency.
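A back-of-the-envelope computation of \(A_{s}\) might look as follows. The numbers are illustrative assumptions (Linux on x86-64 exposes on the order of 350 syscalls, and the allowlist below is hypothetical), not a real seccomp profile.

```python
# Back-of-the-envelope computation of the Execution Attack Surface ratio A_s.
# All numbers are illustrative assumptions, not a real seccomp profile.

TOTAL_KERNEL_INTERFACES = 350  # assumed size of the available kernel interface

authorized_syscalls = {        # hypothetical allowlist for a sandboxed agent
    "read", "write", "close", "mmap", "munmap",
    "exit_group", "futex", "clock_gettime",
}

def attack_surface_ratio(authorized, total):
    """A_s = authorized system calls / total available kernel interfaces."""
    return len(authorized) / total

a_s = attack_surface_ratio(authorized_syscalls, TOTAL_KERNEL_INTERFACES)
over_privileged = a_s > 0.05  # conservative over-privilege threshold

print(f"A_s = {a_s:.3f}, over-privileged: {over_privileged}")
```

An eight-syscall allowlist over an assumed 350-syscall interface yields \(A_{s}\approx 0.023\), comfortably under the conservative bound; an agent needing dozens of syscalls would cross it quickly.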
As established by Saltzer and Schroeder (1975), the protection of information requires that every program and user operate with the minimum set of privileges necessary to complete their jobs. In this architecture, an agent with an \(A_{s}>0.05\) is considered over-privileged. The threshold is intentionally conservative, reflecting the principle of minimal kernel exposure for autonomous workloads rather than an empirically fixed constant.

Platforms are not neutral deployment environments. They encode assumptions about identity, persistence, isolation, observability, and control, which directly shape the behavior of autonomous and semi-autonomous systems (Saltzer and Schroeder 1975, Lampson 2004). In cognitive defense architectures, the platform serves as the substrate on which reasoning, action, and auditability are constrained (Armbrust, et al. 2010, Burns, et al. 2016). What an agent is allowed to do is therefore inseparable from where it is allowed to exist (Leveson 2012, NIST 2023).

Cloud platforms designed for elastic, multi-tenant operation (such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform) offer elastic compute, managed services, and operational convenience at scale (Armbrust, et al. 2010, AWS 2024, Microsoft 2023, Google SRE, 2016). These characteristics are necessary but insufficient for agentic systems operating in security-critical environments (Leveson 2012). Elasticity describes how resources expand and contract; it does not describe how authority is bounded (Saltzer and Schroeder 1975, Lampson 2004). For cognitive defense, the decisive property of a platform is whether it supports bounded execution with enforceable guarantees (NIST Joint Task Force 2020, Miller, Yee and Shapiro 2003).

A platform suitable for agentic cybersecurity must make constraints explicit rather than implicit (Burns, et al. 2016, Levan 2024, The Kubernetes Authors, 2026).
Identity must be scoped to execution contexts rather than embedded in application logic (NIST Joint Task Force 2020, Miller, Yee and Shapiro 2003). Isolation must separate reasoning processes from action surfaces to prevent lateral privilege expansion (Saltzer and Schroeder 1975, Lampson 2004). Persistence must be optional and governed, not an ambient default (Carlini, et al. 2021, Shokri, et al. 2017). Observability must be continuous and structured, enabling the reconstruction of decisions after the fact rather than retrospective interpretation (Sigelman, et al. 2010, Google SRE, 2016).

True isolation requires the hardware itself to become a party to the security contract. Protecting sensitive inference workloads increasingly requires cryptographic isolation at the hardware level. Trusted Execution Environments (TEEs) provide this capability by encrypting memory and restricting host-level inspection during model execution (Skyflow Inc. 2023).

```yaml
# Canon Specification: Sanctuary Execution Policy
execution_surface_policy:
  isolation_level: "Hardware-Encrypted-TEE"  # Cryptographic hardware isolation
  secret_provider: "Vault-Sidecar"           # No local secret storage
  network_egress: "Restricted-VLAN-Only"     # No public internet access
  persistence: "None-Ephemeral-Only"         # Disk wipes on task completion
```

In a production environment (e.g., Google Cloud's Confidential GKE), this materializes as encrypted memory at the silicon level. The agent's thought process remains opaque even to the host operating system.
We refer to this requirement as the Execution Surface Policy, specified above. Within this framing, cognitive defense systems require platforms that can:

Enforce identity-scoped execution, ensuring that actions are always attributable to a defined role, policy, or trust domain (NIST Joint Task Force 2020)
Provide strong isolation between reasoning and action, preventing cognitive components from directly exercising authority (Saltzer and Schroeder 1975, Miller, Yee and Shapiro 2003)
Support deterministic replay and forensic reconstruction, allowing decisions to be examined under the same constraints in which they were made (Sigelman, et al. 2010, Doshi-Velez and Kim 2017)
Treat telemetry as a first-class control signal rather than a debugging artifact, closing the loop between observation and governance (Google SRE, 2016, Leveson 2012)

Platforms optimized primarily for throughput, latency reduction, or cost efficiency tend to blur these boundaries (AWS 2024, Microsoft 2023). Convenience abstractions often collapse identity, execution, and persistence into a single operational plane, increasing the difficulty of containment once autonomy is introduced (NIST Joint Task Force 2020). In contrast, platforms that expose fine-grained controls over execution context, privilege boundaries, and auditability enable architectures in which autonomy is contained by design rather than corrected after failure (Leveson 2012, NIST 2023).

In this sense, platforms do not merely host cognitive systems; they define the conditions under which cognition can safely operate (Armbrust, et al. 2010, Leveson 2012). The substrate precedes the agent (Lampson 2004).

Figure 8.1 depicts a layered, platform-mediated constraint architecture that separates cognition from authority. Identity, execution, and persistence are managed by infrastructure-level control planes rather than being embedded within agent logic.
Execution privileges, identity scope, and persistence are enforced at the platform level, which enables bounded autonomy and auditability.
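The Execution Surface Policy above lends itself to a simple pre-launch conformance check. The following sketch is an illustrative assumption, not a Kubernetes or Confidential GKE admission controller; it mirrors the four YAML keys as a plain dict and reports any drift from the canon specification.

```python
# Pre-launch conformance check against the canon Execution Surface Policy.
# Illustrative sketch only: the dict mirrors the YAML keys, and the checker
# is an assumption, not a real admission controller.

REQUIRED = {
    "isolation_level": "Hardware-Encrypted-TEE",
    "secret_provider": "Vault-Sidecar",
    "network_egress": "Restricted-VLAN-Only",
    "persistence": "None-Ephemeral-Only",
}

def policy_violations(policy: dict) -> list:
    """Return the keys where a deployment's policy drifts from the canon."""
    return [k for k, v in REQUIRED.items() if policy.get(k) != v]

conforming = dict(REQUIRED)
drifted = dict(REQUIRED, persistence="Local-Disk")  # hypothetical drift

assert policy_violations(conforming) == []
assert policy_violations(drifted) == ["persistence"]
```

Running such a check before the agent is granted any execution authority keeps containment a design-time property rather than an after-failure correction, consistent with the chapter's argument.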
CONCURRENT SMALL INTESTINAL NEOPLASIA AND LARGE COLON VOLVULUS IN A STANDARDBRED YEAR...
Vânia Barros
João Cascais

and 4 more

March 17, 2026
Large colon volvulus is a common, life-threatening cause of colic in horses. Gastrointestinal neoplasia is rare, especially in young horses, but it may contribute to acute abdominal pain. A 1-year-old Standardbred colt presented with acute colic unresponsive to medical treatment. An exploratory laparotomy identified extensive gastrointestinal pathology, and euthanasia was elected due to a poor prognosis. Necropsy revealed a 360-degree volvulus of the caecum and large colon with adhesions, enterotyphlocolitis, and a well-circumscribed mesenteric mass arising from the small intestine. Histopathology in combination with immunohistochemistry (IHC) confirmed the mass as a soft tissue sarcoma (leiomyosarcoma). This case describes the coexistence of intestinal neoplasia and large colon volvulus in a juvenile horse, emphasising the diagnostic challenges and the potential for neoplasms to precipitate acute colic.
Robotic Strategy Adaptation via Monte Carlo Regret Minimization in Uncertain Multi-Ag...
Sushant Shivankar

March 18, 2026
This paper introduces a novel algorithmic framework that adapts Monte Carlo Counterfactual Regret Minimization (MCCFR) to real-time robotic decision-making in dynamic, imperfect-information scenarios. By integrating incremental tree search and targeted sampling, our method enables autonomous agents, such as multi-robot systems operating under partial observability, to compute near-equilibrium strategies online with limited computational resources. We demonstrate the approach's convergence guarantees in simulation-based adversarial settings, where robotic agents must conceal private sensor data while inferring opponent intent. Experimental results in simulated tactical pursuit-evasion and distributed resource competition games confirm that our algorithm reduces exploitability over time and outperforms existing imperfect-information search methods, providing a principled foundation for robust robotic interaction under uncertainty.
GALAR-TemporalNet: Anatomy-Guided Temporal Multi-Label Classification with Bidirectio...
Jiye Won

and 2 more

March 18, 2026
A document by Jiye Won.
POLARIS: A Political-Aware Large Multimodal Model for Analyzing Technology Deployment...
Haoran Ji

and 1 more

March 18, 2026
The pervasive deployment of technology in conflict-affected and politically sensitive regions presents complex narratives, often framed as both "stability maintenance" and instruments of "oppression." Existing large language and multimodal models frequently exhibit a "technological neutrality fallacy," failing to capture deep political intentions, implicit biases, and cross-modal consistency. To address this, we introduce Technology Deployment Framing Analysis (TDFA), a novel task encompassing stance classification, deployment intent recognition, cross-modal consistency analysis, and implicit bias detection. We propose POLARIS, a Political-Aware Large Multimodal Model framework built upon prevailing architectures, underpinned by our core philosophy of Political Context Injection. POLARIS integrates a Contextual Knowledge Adapter for geopolitical knowledge infusion, a Framing-Aware Attention mechanism to highlight politically salient features, and a Bias Calibration Head optimized for task-specific outputs. For evaluation, we construct TechConflict-23K, a substantial multimodal dataset with professionally annotated samples. Our multi-stage training strategy includes instruction tuning, a crucial bias alignment phase to penalize "neutral hallucination," and contrastive alignment for cross-modal consistency. Extensive experiments demonstrate that POLARIS achieves state-of-the-art performance across all TDFA sub-tasks, significantly outperforming strong baselines, including advanced multimodal models. These results validate POLARIS's ability to provide more accurate and objective analytical tools for discerning the complex realities and impacts of technology deployment in sensitive geopolitical contexts.
VACUUM HARDWARE MANUAL Deterministic Derivation of α −1 and the End of Empirical Ambi...
Heiko Grimberg

March 18, 2026
The Unified Chronofractal Field (UCF) framework has reached numerical inevitability. This specification documents the hardware architecture of the 3D vacuum, proving that the fine-structure constant (α⁻¹) is not an empirical variable, but a deterministic output of a 14-mode Bravais lattice constraint. By formulating the topological impedance of fractal time (ν) against the spatial manifold (π), the k = 0 protocol eliminates the need for free parameters or fine-tuning in quantum electrodynamics. The universe is not a choice; it is a strict geometric execution.