AUTHOREA

Preprints

Explore 66,105 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.

Measuring river surface velocity using UAS-borne Doppler radar
Zhen Zhou, Laura Riis-Klinkvort, and 19 more
March 05, 2024
Using Unmanned Aerial Systems (UAS) equipped with optical RGB cameras and Doppler radar, surface velocity can be efficiently measured at high spatial resolution. UAS-borne Doppler radar is particularly attractive because it is suitable for real-time velocity determination, because the measurement is contactless, and because it has fewer limitations than image velocimetry techniques. In this paper, five cross-sections (XSs) were surveyed within a 10 km stretch of Rönne Å in Sweden. Ground-truth surface velocity observations were retrieved with an electromagnetic velocity sensor (OTT MF Pro) along each XS at 1 m spacing. Videos from a UAS RGB camera were analyzed using both Particle Image Velocimetry (PIV) and Space-Time Image Velocimetry (STIV) techniques. Furthermore, we recorded full waveform signal data using a Doppler radar at multiple waypoints across the river. An algorithm fits two alternative models to the average amplitude curve to derive the correct river surface velocity: a one-peak Gaussian model or a two-peak Gaussian model. Results indicate that both the river flow velocity and the propwash velocity caused by the drone can be detected in XSs where the flow velocity is low, whereas the drone-induced propwash velocity can be neglected in fast and highly turbulent flows. To verify the river flow velocity derived from Doppler radar, a mean PIV value within the footprint of the Doppler radar at each waypoint was calculated. Finally, quantitative comparisons of OTT MF Pro data with STIV, mean PIV, and Doppler radar revealed that UAS-borne Doppler radar can reliably measure river surface velocity.
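As a rough illustration of the peak-fitting step described above, the sketch below fits one-peak and two-peak Gaussian models to a synthetic average-amplitude curve and picks the better fit with an information criterion; the data, the AIC-based selection rule, and all constants are assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch: fit one- and two-peak Gaussian models to an average
# Doppler amplitude curve and keep the better fit (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gauss1(v, a, mu, sig):
    return a * np.exp(-0.5 * ((v - mu) / sig) ** 2)

def gauss2(v, a1, mu1, s1, a2, mu2, s2):
    return gauss1(v, a1, mu1, s1) + gauss1(v, a2, mu2, s2)

# Synthetic velocity axis (m/s) and amplitude curve with two peaks:
# a low-velocity propwash peak and the river-surface-velocity peak.
v = np.linspace(0.0, 2.0, 200)
rng = np.random.default_rng(0)
amp = gauss2(v, 0.4, 0.15, 0.05, 1.0, 0.9, 0.10) + 0.02 * rng.standard_normal(v.size)

p1, _ = curve_fit(gauss1, v, amp, p0=[1.0, 0.8, 0.1])
p2, _ = curve_fit(gauss2, v, amp, p0=[0.5, 0.2, 0.1, 1.0, 0.8, 0.1])

# Compare fits with a simple AIC: fewer parameters win unless the
# two-peak model reduces the residual variance enough.
def aic(y, yhat, k):
    rss = np.sum((y - yhat) ** 2)
    return y.size * np.log(rss / y.size) + 2 * k

if aic(amp, gauss2(v, *p2), 6) < aic(amp, gauss1(v, *p1), 3):
    peaks = sorted([p2[1], p2[4]])
    print(f"two-peak fit: propwash ~{peaks[0]:.2f} m/s, surface ~{peaks[1]:.2f} m/s")
else:
    print(f"one-peak fit: surface velocity ~{p1[1]:.2f} m/s")
```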
Numerical Simulations of Scalar Transport Across the Sediment-Water Interface using a...
Edwin Rafael Aponte-Cruz, Sylvia Rodríguez-Abudo, and 1 more
March 05, 2024
The two-phase flow model SedFOAM was implemented along with an advection-diffusion solver to further understand temporal and spatial variability of scalar fluxes near a permeable rippled bed and their response to turbulent oscillatory flow conditions, sediment transport, and ripple migration. Numerical experiments were performed with scalars introduced by constant point sources at four different along-ripple locations. A time-dependent analysis of centroid location, spread, and area of the scalar plumes revealed distinct patterns for each injection point. Dye ejected from the left flank traveled the furthest, spreading up to $0.85\lambda$. The crest and right flank cases spread a total distance of $0.49\lambda$. The trough case exhibited the lowest spread of dye, with larger values near flow reversal likely due to vortex formation and ejection induced by pressure differences. Bulk statistics of turbulent scalar fluxes showed vertical fluxes dominating over horizontal fluxes, with maximum vertical-to-horizontal ratios of 1.6 on the right flank to 6.7 on the left flank. Flank cases exhibited the largest time-averaged normalized turbulent fluxes, with vertical values close to $1\times 10^{-3}$, which approximately doubles the $4.5\times 10^{-4}$ and $5.7\times 10^{-4}$ values of the crest and trough cases, respectively. These results highlight the role of flow characteristics and sediment dynamics in shaping scalar transport near the sediment-water interface.
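As a minimal illustration of the plume bulk statistics mentioned above (centroid location, spread, and area), the sketch below computes them from a synthetic 2-D concentration snapshot; the field and the 5%-of-maximum area threshold are assumptions, not the study's definitions.

```python
# Minimal sketch: centroid, spread, and area of a scalar plume from a
# 2-D concentration field (synthetic field standing in for solver output).
import numpy as np

nx, nz = 200, 80
x = np.linspace(0.0, 2.0, nx)     # along-ripple coordinate (in ripple lengths)
z = np.linspace(0.0, 0.4, nz)     # height above the bed
X, Z = np.meshgrid(x, z, indexing="ij")

# Synthetic Gaussian plume standing in for one snapshot of the dye field.
c = np.exp(-((X - 0.6) ** 2 / 0.02 + (Z - 0.1) ** 2 / 0.002))

mass = c.sum()
x_c = (c * X).sum() / mass                              # centroid location
z_c = (c * Z).sum() / mass
sigma_x = np.sqrt((c * (X - x_c) ** 2).sum() / mass)    # spread (2nd moment)
sigma_z = np.sqrt((c * (Z - z_c) ** 2).sum() / mass)

dx, dz = x[1] - x[0], z[1] - z[0]
area = (c > 0.05 * c.max()).sum() * dx * dz             # area above 5% of max

print(f"centroid=({x_c:.2f}, {z_c:.3f}), spread=({sigma_x:.3f}, {sigma_z:.4f}), area={area:.4f}")
```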
Thermoluminescence and Apollo 17 ANGSA lunar samples: NASA’s fifty-year experiment an...
Derek WG Sears, Alexander Sehlke, and 2 more
March 05, 2024
By placing Apollo 17 regolith samples in a freezer, and storing an equivalent set at room temperature, NASA effectively performed a fifty-year experiment in the kinetics of natural thermoluminescence (TL) of the lunar regolith. We have performed a detailed analysis of the TL characteristics of a sunlit sample near the landing site (70180), a sample 3 m deep near the landing site (70001), a sample partially shaded by a boulder (72320), and a sample completely shaded by a boulder (76240). We find eight discrete TL peaks: five apparent in curves for samples in the natural state and seven in samples irradiated in the laboratory at room temperature. For each peak we suggest values for peak temperatures and the kinetic parameters E (activation energy, i.e. “trap depth”, eV) and s (Arrhenius factor, s-1). The lowest natural TL peak in the continuously shaded sample 76240 dropped in intensity by 60±10% (1976 vs. present room temperature samples) and 43±8% (freezer vs. room temperature samples) over the 50-year storage period, while the other samples showed no change. These results are consistent with our E and s parameters. The large number of peaks, the appearance of additional peaks after irradiation, and literature data suggest that glow curve peaks are present in lunar regolith at ~100 K and that their intensity can be used to determine storage times at these temperatures. Thus, a TL instrument on the Moon could be used to prospect for micro-cold traps capable of deposition, build-up, and storage of volatiles.
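For context on how the E and s parameters control storage-time sensitivity, first-order (Randall-Wilkins) kinetics gives an isothermal decay of the trapped-charge population; the expressions below are the standard textbook forms, not equations or values taken from this study:

$$ n(t) = n_0 \exp\!\left[-t\,s\,e^{-E/(k_B T)}\right], \qquad \frac{n_{\text{freezer}}(t)}{n_{\text{room}}(t)} = \exp\!\left[t\,s\left(e^{-E/(k_B T_{\text{room}})} - e^{-E/(k_B T_{\text{freezer}})}\right)\right] $$

With the same E and s, a lower storage temperature gives a slower decay, which is the basis of the freezer-versus-room-temperature comparison.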
Increased Summer Monsoon Rainfall over Northwest India caused by Hadley Cell Expansio...
Ligin Joseph, Nikolaos Skliris, and 4 more
March 05, 2024
The Indian summer monsoon precipitation trend from 1979 to 2022 shows a substantial 40% increase over Northwest India, which is in agreement with the future projections of the Coupled Model Intercomparison Project 6 (CMIP6). The observationally constrained reanalysis dataset reveals that a prominent sea surface warming in the western equatorial Indian Ocean and the Arabian Sea might be responsible for the rainfall enhancement through strengthening the cross-equatorial monsoonal flow and associated evaporation. We show that the cross-equatorial monsoon winds over the Indian Ocean are strengthening due to the merging of Pacific Ocean trade winds and rapid Indian Ocean warming. These winds also enhance the latent heat flux (evaporation), and in combination, this results in increased moisture transport from the ocean toward the land.
Dominant Flood Types in Europe and Their Role in Flood Statistics
Svenja Fischer, Andreas Schumann, and 1 more
March 05, 2024
Flood events in Europe are characterised by different generating mechanisms determining large varieties of peaks, volumes and hydrographs. Understanding such mechanisms is crucial not only for deterministic or stochastic modelling of floods, but also for practical purposes such as hydrological design. In this study, the driving mechanisms of floods are analysed and the associated catchment and atmospheric attributes controlling these flood types are identified through a classification and regression tree approach. In addition, the relevance of flood types for estimations of flood probabilities is analysed using type-based flood statistics. It is shown which flood types dominate the more frequent floods and which flood types are most relevant for extreme flood events. Ordinary and extraordinary floods are identified by a Likelihood-Ratio test and tested for a significant difference in the frequency distribution of flood types. Our results show that the flood types vary regionally in Europe with distinct clusters. In the Alpine region, heavy rainfall floods are responsible for the most extreme flood events, while in the northern parts of Europe flood events caused by snowmelt lead to the largest peaks. This is reflected in the flood statistics by the type-specific distributions, which have a different tail heaviness. The results of this study demonstrate that flood types are a main explanatory factor when detecting regional patterns of statistical attributes of flood events.
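As a schematic of the classification-and-regression-tree step described above, the sketch below fits a small decision tree that maps catchment and atmospheric attributes to a flood type; the feature names, labelling rule, and data are synthetic placeholders, not the study's attributes or classes.

```python
# Minimal sketch: a classification tree linking catchment and atmospheric
# attributes to flood type (synthetic data; features and classes are illustrative).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0, 200, n),    # event rainfall depth (mm)
    rng.uniform(-10, 15, n),   # mean air temperature (deg C)
    rng.uniform(0, 300, n),    # snow water equivalent (mm)
    rng.uniform(0.1, 0.9, n),  # antecedent soil moisture (-)
])
# Toy labelling rule standing in for observed flood types.
y = np.where(X[:, 2] > 150, "snowmelt",
    np.where(X[:, 0] > 120, "heavy rainfall", "rain-on-wet-soil"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
```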
Fe2+ partitioning in Al-free pyrolite: consequences for seismic velocities and hetero...
Jingyi Zhuang, Renata Wentzcovitch, and 1 more
March 15, 2024
Iron partitioning among the main lower mantle phases, bridgmanite (Bm) and ferropericlase (Fp), has non-monotonic behavior owing to the high-spin to low-spin crossover in ferrous iron (Fe2+) in Fp. Results of previous studies of the iron partitioning coefficient between these phases, KD, still have considerable uncertainty. Here, we investigate the Fe2+ partitioning behavior using well-documented ab initio free energy results plus new updates. Although we focus on Fe2+ only, we describe the effect of this iron spin crossover (ISC) on KD, and the effect of KD on compositions and seismic velocities in a pyrolitic aggregate. Our results suggest that the aggregate's velocities are mainly affected by the ISC and less so by the Fe2+ partitioning. In contrast, iron partitioning manifests in thermally induced velocity heterogeneity ratios. Predictions of the seismological parameter RS/P (∂lnVS/∂lnVP) that include iron partitioning effects quantitatively resemble the RS/P values inferred from several tomographic studies down to 2,400 km depth.
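For reference, the iron partitioning coefficient between bridgmanite and ferropericlase and the heterogeneity ratio discussed above are commonly defined as follows (standard definitions, not equations quoted from this paper):

$$ K_D = \frac{\left(X_{\mathrm{Fe}}/X_{\mathrm{Mg}}\right)^{\mathrm{Bm}}}{\left(X_{\mathrm{Fe}}/X_{\mathrm{Mg}}\right)^{\mathrm{Fp}}}, \qquad R_{S/P} = \frac{\partial \ln V_S}{\partial \ln V_P} $$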
Autocorrelation - A Simple Diagnostic for Tropical Precipitation Variability in Globa...
Dorian Spät, Michela Biasutti, and 3 more
July 17, 2024
We propose the lag-1 autocorrelation of daily precipitation as a simple diagnostic of tropical precipitation variability in climate models. This metric generally has a relatively uniform distribution of positive values across the tropics. However, selected land regions are characterized by exceptionally low autocorrelation values. Low values correspond to the dominance of high frequency variance in precipitation, and specifically of high frequency convectively coupled equatorial waves. Consistent with previous work, we show that CMIP6 climate models overestimate the autocorrelation. Global kilometer-scale models capture the observed autocorrelation when deep convection is explicitly simulated. When a deep convection parameterization is used, though, the autocorrelation increases over land and ocean, suggesting that land surface-atmosphere interactions are not responsible for the changes in autocorrelation. Furthermore, the metric also tracks the accuracy of the representation of the relative importance of high frequency and low frequency convectively coupled equatorial waves in the models.
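Because the diagnostic itself is so simple, a short sketch may help: the code below computes the lag-1 autocorrelation of a daily precipitation series; the synthetic series stands in for station or grid-point data and is not from the study.

```python
# Minimal sketch: lag-1 autocorrelation of a daily precipitation series
# (synthetic data; gridded model output would be handled per grid point).
import numpy as np

rng = np.random.default_rng(1)
# Toy daily precipitation: AR(1)-like wet/dry persistence plus gamma-distributed bursts.
n = 3650
p = np.empty(n)
p[0] = 5.0
for t in range(1, n):
    p[t] = max(0.0, 0.5 * p[t - 1] + rng.gamma(shape=1.0, scale=3.0) - 2.0)

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float)
    a = x[:-1] - x[:-1].mean()
    b = x[1:] - x[1:].mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

print(f"lag-1 autocorrelation: {lag1_autocorr(p):.2f}")
```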
Small fish biomass limits the catch potential in the High Seas
Jerome Guiet, Daniele Bianchi, and 4 more
March 05, 2024
The High Seas, lying beyond the boundaries of nations’ Exclusive Economic Zones, cover the majority of the ocean surface and host roughly two thirds of marine primary production. Yet, only a small fraction of global wild fish catch comes from the High Seas, despite intensifying industrial fishing efforts. The surprisingly small fish catch could reflect economic features of the High Seas - such as the difficulty and cost of fishing in remote parts of the ocean surface - or ecological features resulting in a small biomass of fish relative to primary production. We use the coupled biological-economic model BOATS to estimate contributing factors, comparing observed catches with simulations where: (i) fishing cost depends on distance from shore and seafloor depth; (ii) catchability depends on seafloor depth or vertical habitat extent; (iii) regions with micronutrient limitation have reduced biomass production; (iv) the trophic transfer of energy from primary production to demersal food webs depends on depth; and (v) High Seas biomass migrates to coastal regions. Our results suggest that the most important features are ecological: demersal fish communities receive a large proportion of primary production in shallow waters, but very little in deep waters due to respiration by small organisms throughout the water column. Other factors play a secondary role, with migrations having a potentially large but uncertain role, and economic factors having the smallest effects. Our results stress the importance of properly representing the High Seas biomass in future fisheries projections, and clarify their limited role in global food provision.
Major modes of climate variability dominate nonlinear Antarctic ice-sheet elevation c...
Matt A King, Poul Christoffersen, and 1 more
June 21, 2024
We explore the links between elevation variability of the Antarctic Ice Sheet (AIS) and large-scale climate modes. Using multiple linear regression, we quantify the time-cumulative effects of El Niño Southern Oscillation (ENSO) and the Southern Annular Mode (SAM) on gridded AIS elevations. Cumulative ENSO and SAM explain a median of 29% of the partial variance and up to 85% in some coastal areas. After spatial smoothing, these signals have high spatial correlation with those from GRACE gravimetry (r~=0.65 each). Much of the signal is removed by a firn densification model but inter-model differences exist especially for ENSO. At the lower parts of the Thwaites and Pine Island glaciers, near their grounding line, we find the Amundsen Sea Low (ASL) explains ~90% of the observed elevation variability. There, modeled firn effects explain only a small fraction of the variability, suggesting significant height changes could be a response to climatological ice-dynamics.
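As an illustration of the regression setup described above, the sketch below regresses a synthetic elevation-anomaly series on time-cumulative ENSO and SAM indices plus a linear trend; the indices, coefficients, and noise level are invented for the example, and the study applies this per grid cell of the altimetry product.

```python
# Minimal sketch: regress a height-change time series on cumulative ENSO
# and SAM indices plus a linear trend (synthetic monthly data).
import numpy as np

rng = np.random.default_rng(7)
n_months = 240
enso = rng.standard_normal(n_months)          # toy ENSO index
sam = rng.standard_normal(n_months)           # toy SAM index
cum_enso = np.cumsum(enso)                    # time-cumulative indices
cum_sam = np.cumsum(sam)
t = np.arange(n_months)

# Synthetic elevation anomaly (m): trend + cumulative-index response + noise.
h = 0.002 * t + 0.01 * cum_enso - 0.015 * cum_sam + 0.05 * rng.standard_normal(n_months)

# Least-squares fit of h against [1, t, cum_enso, cum_sam].
A = np.column_stack([np.ones(n_months), t, cum_enso, cum_sam])
coef, *_ = np.linalg.lstsq(A, h, rcond=None)
resid = h - A @ coef
explained = 1.0 - resid.var() / h.var()
print(f"coefficients: {coef.round(4)}, variance explained: {explained:.2f}")
```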
UAV-based land surface temperatures and vegetation indices explain and predict spatia...
Beyer Matthias, Alberto Iraheta, and 9 more
March 05, 2024
The spatial variation of soil water isotopes (SWI) - the baseline for investigating root water uptake (RWU) depths with water stable isotope techniques - has rarely been investigated. Here, we use spatial SWI depth-profile sampling in combination with unmanned aerial vehicle (UAV) based land surface temperature estimates and vegetation indices (VI) in order to improve process understanding of how soil water content and isotope patterns relate to canopy status. We carried out a spatial sampling of ten SWI depth profiles in a tropical dry forest. UAV data were collected and analyzed to obtain a detailed characterization of soil temperature and canopy status. We then performed a statistical analysis relating the VI and land surface temperatures to soil water content and SWI values at different spatial resolutions (3 cm to 5 m). The best relationships were used to generate soil water isoscapes for the entire study area. Results suggest that soil water content and SWI values are strongly mediated by canopy parameters (VI). Various VI correlate strongly with soil water content and SWI values across all depths. SWI values at the surface depend on land surface temperature (R² of 0.65 for δ18O and 0.57 for δ2H). The strongest overall correlations were found at a spatial resolution of 0.5 m. We speculate that this might be the ideal resolution for spatially characterizing SWI patterns and investigating RWU. Supporting spatial analyses of SWI with UAV-based approaches might be a future avenue for improving the spatial representation and credibility of such studies.
A practical approach for tectonic discrimination of basalts using geochemical data th...
Mengqi Gao, Zhaochong Zhang, and 5 more
March 05, 2024
Identifying the tectonic setting in which rocks formed is an essential task in the geosciences. The conventional approach is to employ standard tectonic discrimination diagrams based on elemental correlations and ratios, which are sometimes plagued by uncertainties and limitations. Machine learning algorithms trained on large datasets can effectively overcome these problems. In this study, three machine learning algorithms, namely Support Vector Machine, Random Forest, and XGBoost, were employed to classify basalts into seven tectonic settings: intraplate basalts, island arc basalts, ocean island basalts, mid-ocean ridge basalts, back-arc basin basalts, oceanic flood basalts, and continental flood basalts. For altered basalts we use 22 relatively immobile elements (TiO2, P2O5, Nb, Ta, Zr, Hf, Y, La, Ce, Pr, Nd, Sm, Eu, Gd, Ho, Er, Yb, Lu, Dy, Tb, Cr, Ni), and for fresh basalts 35 major plus trace elements, to build separate discrimination models for the seven tectonic settings. The results indicate that XGBoost performs best in discriminating basalts among the seven tectonic settings, achieving accuracies of 85% and 89%, respectively. Compared with previous models, the new method presented in this study is expected to have better practical applications.
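As a schematic of the XGBoost workflow described above, the sketch below trains a multiclass model on the 22 immobile-element concentrations; the geochemical data are random placeholders (so the score is near chance), and the class codes are illustrative abbreviations rather than the study's labels.

```python
# Minimal sketch: multiclass XGBoost on immobile-element concentrations
# to predict tectonic setting (synthetic data; real models use compiled
# geochemical databases, not random numbers).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

immobile = ["TiO2", "P2O5", "Nb", "Ta", "Zr", "Hf", "Y", "La", "Ce", "Pr",
            "Nd", "Sm", "Eu", "Gd", "Ho", "Er", "Yb", "Lu", "Dy", "Tb", "Cr", "Ni"]
settings = ["IAB", "OIB", "MORB", "BABB", "CFB", "OFB", "intraplate"]

rng = np.random.default_rng(3)
n = 700
X = rng.lognormal(mean=1.0, sigma=0.8, size=(n, len(immobile)))
y = rng.integers(0, len(settings), size=n)     # integer-coded tectonic setting

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")  # ~chance on random data
```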
Standardized daily high-resolution large-eddy simulations of the Arctic boundary laye...
Niklas Schnierstein, Jan Chylik, and 3 more
March 13, 2024
This study utilizes the wealth of observational data collected during the recent MOSAiC drift experiment to constrain and evaluate 190 daily Large-Eddy Simulations (LES) of Arctic boundary layers and clouds at turbulence-resolving resolutions. A standardized approach is adopted to tightly integrate field measurements into the experimental configuration. Covering the full drift represents a step forward from single-case LES studies, and allows for a robust assessment of model performance against independent data under a broad range of atmospheric conditions. A homogeneously forced Eulerian domain is simulated, initialized with radiosonde and value-added cloud profiles. Prescribed boundary conditions include various measured surface characteristics. Time-constant composite forcing is applied, primarily consisting of subsidence rates sampled from reanalysis data. The simulations run for multiple hours, allowing turbulence and mixed-phase clouds to spin up while still facilitating direct comparison to MOSAiC data. Key aspects such as the vertical thermodynamic structure, cloud properties, and surface energy fluxes are satisfactorily reproduced and maintained. Specifically, the model captures the bimodal distribution of atmospheric states that is typical of Arctic climate. Selected days are investigated more closely to assess the model’s skill in maintaining the observed boundary layer structure. The sensitivity to various aspects of the experimental configuration and model physics is tested. The model input and output are available to the scientific community, supplementing the MOSAiC data archive. The close agreement with observed meteorology justifies the use of LES data for gaining further insight into Arctic processes and their role in Arctic climate change.
Projection of Global Future Lightning Occurrence using only Large-Scale Environmental...
Montana Etten-Bohm, Courtney Schumacher, and 3 more
March 15, 2024
This study evaluates a lightning parameterization that utilizes only large-scale environmental variables (i.e., convective available potential energy (CAPE), column moisture, and lifting condensation level (LCL)) for present-day (2017-19) and end-of-century (2098-2100) RCP8.5 climate scenarios in the Community Atmosphere Model version 5 (CAM5). Using a single equation, the present-day prediction can produce a reasonable land/ocean ratio in lightning occurrence. The end-of-century prediction shows relative increases of about 50% over higher-latitude land, but much more variable increases and decreases across mid-latitude ocean and the tropics such that the overall global lightning occurrence is expected to slightly decrease. Lightning occurrence over land predicted from present-day CAM5 is less than that using MERRA-2 reanalysis because of differences in the basic-state variables used as predictors. In addition, the choice of dilute or undilute CAPE will impact future lightning predictions over land, but the environment-only parameterization results are more consistent than a CAPE x precipitation parameterization.
Plume-driven subduction termination in 3-D mantle convection models
Erin Heilman, Thorsten Becker, and 1 more
March 05, 2024
The effect of mantle plumes is secondary to that of subducting slabs for modern plate tectonics, e.g. when considering plate driving forces. However, the impact of plumes on tectonics and planetary surface evolution may nonetheless have been significant. We use numerical mantle convection models in a 3-D spherical chunk geometry with damage rheology to study some of the potential dynamics of plume-slab interactions. Substantiating our earlier work which was restricted to 2-D geometries, we observe a range of interesting plume dynamics, including plume-driven subduction terminations, even though the new models allow for more realistic flow. We explore such plume-slab interactions, including in terms of their geometry, frequency, and the overall effect of plumes on surface dynamics as a function of the fraction of internal to bottom heating. Some versions of such plume-slab interplay may be relevant for geologic events, e.g. for the inferred ~183 Ma Karoo large igneous province formation and associated slab disruption. More recent examples may include the impingement of the Afar plume underneath Africa leading to disruption of the Hellenic slab, and the current complex structure imaged for the subduction of the Nazca plate under South America. Our results imply that plumes may play a significant role not just in kick-starting plate tectonics, but also in major modifications of slab-driven plate motions, including for the present-day mantle.
Seismic anisotropy from 6C ground motions of ambient seismic noise
Le Tang, Heiner Igel, and 3 more
March 05, 2024
Bering Strait Ocean Heat Transport Drives Decadal Arctic Variability in a High-Resolu...
Yuchen Li, Wilbert Weijer, and 4 more
March 15, 2024
We investigate the role of ocean heat transport (OHT) in driving the decadal variability of the Arctic climate by analyzing the pre-industrial control simulation of a high-resolution climate model. While the OHT variability at 65˚N is greater in the Atlantic, we find that the decadal variability of Arctic-wide surface temperature and sea ice area is much better correlated with Bering Strait OHT than Atlantic OHT. In particular, decadal Bering Strait OHT variability causes significant changes in local sea ice cover and air-sea heat fluxes, which are amplified by shortwave feedbacks. These heat flux anomalies are regionally balanced by longwave radiation at the top of the atmosphere, without compensation by atmospheric heat transport (Bjerknes compensation). The sensitivity of the Arctic to changes in OHT may thus rely on an accurate representation of the heat transport through the Bering Strait, which is difficult to resolve in coarse-resolution ocean models.
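For reference, ocean heat transport through a strait section is commonly computed as the area integral below (a standard definition; the choice of reference temperature varies between studies and is not specified here):

$$ \mathrm{OHT} = \rho_0\, c_p \int_{A} v\,\left(\theta - \theta_{\mathrm{ref}}\right)\,\mathrm{d}A $$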
Refining the Global Picture: the Impact of Increased Resolution on CO₂ Atmospheric In...
Zoé Lloret, Frederic Chevallier, and 2 more
March 15, 2024
The threat posed by the increasing concentration of carbon dioxide (CO₂) in the atmosphere motivates a detailed and precise estimation of CO₂ emissions and absorptions over the globe. This study refines the spatial resolution of the CAMS/LSCE inversion system, achieving a global resolution of 0.7° latitude and 1.4° longitude, or three times as many grid boxes as the current operational setup. In a two-year inversion assimilating the midday clear-sky retrievals of the column-average dry-air mole fraction of carbon dioxide (XCO₂) from NASA’s second Orbiting Carbon Observatory (OCO-2), the elevated resolution demonstrates an improvement in the representation of atmospheric CO₂, particularly at the synoptic time scale, as validated against independent surface measurements. Vertical profiles of the CO₂ concentration differ slightly above 22 km between resolutions compared to AirCore profiles, and highlight differences in the vertical distribution of CO₂ between resolutions. However, this disparity is not evident for XCO₂, as evaluated against independent reference ground-based observations. Global and regional estimates of natural fluxes for 2015-2016 are similar between the two resolutions, but with North America exhibiting a higher natural sink at high-resolution for 2016. Overall, both inversions seem to yield reasonable estimates of global and regional natural carbon fluxes. The increase in calculation time is less than the increase in the number of operations and in the volume of input data, revealing greater efficiency of the code executed on a Graphics Processing Unit. This allows us to make this higher resolution the new standard for the CAMS/LSCE system.
Fallibilistic conundrum   
Dino Muhović
March 05, 2024
Abstract
This paper introduces ’pessimistic meta-deduction’, a thought experiment that synthesizes Bayesian inference with the infinite monkey theorem and applies it to the concept of fallibilism. The initial (and wrong) blend advocated for a rationalistic analogue of pessimistic meta-induction. Upon critical self-analysis, a partial refutation of the thought experiment was revealed. The critique diminished the impact of pessimistic meta-deduction, making it a subtly refined form of fallibilism. Most importantly, however, its implication seems to indicate a potentially mathematically indeterminable problem in the context of any hypothetically true theory, here named the fallibilistic conundrum.

Introduction
In an attempt to create a way to measure the probability that a counterintuitive paradigm (in plant stress physiology, so outside the scope of philosophy) is true, multiple ideas were combined. This particular idea, like many others, did not turn out to be fruitful for its intended purpose. It synthesized Bayesian probability with the infinite monkey theorem and applied it to the philosophy of science, revealing new insights on fallibilism.

Infinite monkey theorem
In exploring the philosophical implications of scientific knowledge and discovery, a turn was made to the illustrative thought experiment known as the infinite monkey theorem. This theorem, in its popular form, is philosophical rather than mathematical and serves as a conceptual tool to elucidate ideas of randomness and probability in the context of an unlimited number of attempts. The theorem posits a scenario in which a monkey, engaged in the random act of striking keys on a typewriter, is allotted an infinite duration of time. Under these conditions, the thought experiment demonstrates that the monkey would reproduce any given text, such as a work of Shakespeare (the cited source used Hamlet as an example). Each keystroke is presumed to be independent and random, devoid of any pattern [1]. In this paper, the infinite monkey theorem assumes metaphorical significance. It suggests that the endless pursuit of scientific inquiry, at least some of which is random, holds the potential to yield an inexhaustible spectrum of theories, data points, and ideas, much like the infinite keystrokes leading to every possible text.

Introduction to Bayesian inference
Bayesian inference is a statistical method based on Bayes’ theorem. Unlike frequentist statistics, Bayesian probability is inherently subjective: it measures a degree of belief. Bayesian inference is the process of updating beliefs in light of new evidence. It begins with a prior probability, which represents an initial degree of belief in a hypothesis. As new evidence is observed, this prior is updated, leading to a revised probability known as the posterior probability. Bayes’ theorem provides the formula for this update: the posterior is obtained from the prior, the likelihood of the evidence under the hypothesis, and the overall probability of the evidence. Prior beliefs can vary between individuals, but with new evidence the posterior beliefs of different individuals tend to converge, provided they interpret the evidence similarly [2]. Bayesian probability can be compared to several philosophical positions, notably empiricism and fallibilism.
It is in accordance with empiricism in emphasizing the role of evidence in shaping beliefs, while also supporting the fallibilist notion that knowledge is subject to revision.

Pessimistic meta-induction
Pessimistic meta-induction in the philosophy of science posits that, since most past scientific theories have been falsified, so too will the ones currently accepted. The argument attempts to falsify the notion of scientific realism (with the scientific method, no less, so it can be viewed as both a deductive and an inductive argument), which holds that successful scientific theories accurately describe reality. Historical examples bolstering it include the phlogiston theory and the ether theory. The phlogiston theory was completely replaced by the discovery of oxygen and the development of modern chemistry. Similarly, the ether theory, which posited a medium for light waves to travel through space, was rendered obsolete by the theory of relativity [3]. Numerous philosophical efforts went into refuting the conclusion; at least one of them claimed that few scientific theories were actually rejected and that the inductive evidence favors optimism. Ironically, in the first table of the cited paper, among the „uncontroversial“ scientific theories there are at least three mutually exclusive theories: VSEPR, valence bond theory, and molecular orbital theory [4], each predicting the shape of a molecule in a very different way and each usually reaching the same conclusion.

Contemporary significance of pessimistic meta-induction (chemistry as an example)
The gross theoretical incompatibility between valence bond and molecular orbital theory is well known [5]. Valence bond theory is usually utilized in organic chemistry. It focuses on the concept of electron pairs shared between atoms, forming covalent bonds, and explains molecular shapes through the overlap of atomic orbitals between atoms and through hybridization of the orbitals within one atom, where the atomic orbitals mix to form new orbitals. Molecular orbital theory is usually preferred in inorganic chemistry. It conceptualizes electrons in molecules as occupying molecular orbitals that extend over the entire molecule, and, unlike valence bond theory, it explains not just the shape of molecules but also their magnetic properties and colors. It should be noted that the previously cited source heterodoxically disputes the failure of valence bond theory to gauge the magnetic properties of oxygen, the most commonly used example of its failure to predict the magnetic properties of a compound [5]. This dichotomy between theoretical incompatibility and methodological complementarity shows that pessimistic meta-induction is not irrelevant in the modern context.

Materials and methods
The methods were based on thought experiments. The research, if it can be called that, was unplanned, and the insight in this paper was arrived at in error. The goal was to create a statistical tool that would reduce the subjectivity of Bayesian inference, though the subjective elements cannot be entirely removed from it.

The original thought experiment
The thought experiment, pessimistic meta-deduction, begins by applying Bayesian inference to the realm of scientific theories. A well-tested theory would still carry a small probability of falsehood, because Bayesian updating never truly reaches certainty.
The infinite monkey theorem, in its application, generates an infinite number of attempts at falsification. The preliminary conclusion of the deduction is therefore that our current scientific theories are likely to end up falsified, as we are multiplying by a number that approaches infinity.

Partial refutation due to Bayesian updating
The thought experiment’s reliance on Bayesian probability introduces a critical weakness: while scientific theories are indeed fallible and subject to change, the likelihood of a well-established theory being completely overturned decreases with ever more supportive evidence. The partial refutation brings the thought experiment’s conclusion closer to a quantitative analogue of fallibilism. The outcome of the experiment therefore does not substantially alter the concept of fallibilism but merely refines it. It should be mentioned that contemporary fallibilism already has many forms [6].

Implications of the partial refutation: a mathematically undefinable outcome in true theories
Deeper examination reveals that the partial refutation of the ’pessimistic meta-deduction’ thought experiment exposes a potentially unresolvable mathematical problem, at least when applied to a hypothetically true paradigm. As evidence accumulates, the Bayesian probability of a paradigm’s veracity continually increases, asymptotically approaching certainty (a probability of 1). Consequently, the probability of falsification per attempt correspondingly diminishes, ever nearing zero. On the other hand, given an infinite amount of time, the opportunity for potential falsification extends without bound: the second factor approaches infinity as the first approaches zero. The mathematical product of these two factors, one diminishing towards zero and the other expanding towards infinity, is the probability of falsification, and this situation is here named the fallibilistic conundrum.

Discussion
The fallibilistic conundrum could be further refined if a mathematical formulation capable of resolving it were found. Two components create the conundrum: the number of opportunities for falsification tends towards infinity, and perpetual updating drives the probability of falsification per opportunity towards zero as the Bayesian probability approaches one. Since the same variable (time) drives both phenomena, a good mathematical formulation might resolve the conundrum, possibly with the use of limits or calculus. A mathematical resolution, if at all possible, would yield the probability that the theory will withstand the test of time, which would matter both for science in the narrower sense of the term and for data science. It should not go unsaid that any real or hypothetical ability to update an estimate of probability, including an intuitive sense of truth value, would generate a fallibilistic conundrum; Bayesian probability was simply used as an example. Had any probabilistic assessment not immune to the Dutch book been used, the outcome of the first thought experiment would have been the same.

Conclusion
If the theory is true (and the data and their interpretation are not faulty), the Bayesian probability rises towards one as evidence is acquired through time. Consequently, the probability of falsification per attempt falls with it, towards zero. But given an infinite amount of time, the number of attempts approaches infinity.
This creates the conundrum in which the (subjective) probability of falsification per attempt drops towards zero while the number of attempts rises towards infinity. The two would have to be multiplied to obtain the probability of falsification, which possibly cannot be done. Since time, as the common variable, drives both quantities, one directly and one indirectly, the problem might possibly (though uncertainly) be resolvable with limits or calculus given a good mathematical formulation. A potential resolution might not only further refine fallibilism, and perhaps other aspects of the philosophy of science, but also advance data science.

Literature
1. Banerji, C.R.S.; Mansour, T.; Severini, S. A Notion of Graph Likelihood and an Infinite Monkey Theorem. Journal of Physics A: Mathematical and Theoretical 2013, 47, 035101, doi:10.1088/1751-8113/47/3/035101.
2. Bolstad, W.M.; Curran, J.M. Introduction to Bayesian Statistics, Third Edition; 2016.
3. Laudan, L. A Confutation of Convergent Realism. Philosophy of Science 1981, 48, 19–49, doi:10.1086/288975.
4. Mizrahi, M. The Pessimistic Induction: A Bad Argument Gone Too Far. Synthese 2012, 190, 3209–3226, doi:10.1007/s11229-012-0138-3.
5. Shaik, S.; Danovich, D.; Hiberty, P.C. Valence Bond Theory—Its Birth, Struggles with Molecular Orbital Theory, Its Present State and Future Prospects. Molecules 2021, 26, 1624, doi:10.3390/molecules26061624.
6. Reed, B. Fallibilism. Philosophy Compass 2012, 7, 585–596, doi:10.1111/j.1747-9991.2012.00502.x.

Affiliations: At the time of writing and submission, the author was a PhD candidate at Megatrend University, Faculty of Biofarming, and the „direktor“ (the Serbian analogue of a CEO) of ecoera doo, Belgrade; „doo“ has roughly the same meaning as „LLC“ in the anglosphere or GmbH in Germany, Austria and Switzerland.

Conflicts of interest: The author declares no conflicts of interest.
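One possible way to formalize the limit problem raised in the Discussion and Conclusion above (an illustrative formulation, not part of the paper): if attempt i falsifies the theory with probability p_i, and attempts are treated as independent, then

$$ P(\text{never falsified}) \;=\; \prod_{i=1}^{\infty} \left(1 - p_i\right) \;>\; 0 \quad\Longleftrightarrow\quad \sum_{i=1}^{\infty} p_i < \infty, $$

so whether infinitely many attempts almost surely falsify the theory depends on how fast Bayesian updating drives p_i to zero: for example, p_i proportional to 1/i gives a divergent sum (eventual falsification), while p_i proportional to 1/i² leaves a positive survival probability.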
The Gravito-Electro-Magnetic (GEM) Experiment for a graviton theory eking GR
Andrea López de Recalde
March 04, 2024
Scrutiny of experimental observations of the bending of EM radiation, one of the decisive proofs of General Relativity, suggests a clue of profound significance: gravity appears to deflect EM radiation in proportion to its frequency. Differences between the deflection of light beams and of microwave radiation are minimal, but real under an accurate statistical examination. The GEM Experiment described in this paper, for which a patent was released in December 2022 by the Italian "Ministero delle Imprese e del Made in Italy", aims to definitively validate this theory. The proposed system employs a tunable source of γ-rays around the Higgs resonance, such as the one offered by the LHC facilities at CERN. The γ-ray source is tunable because the energy used to collide hadrons is tunable, which is essential for detecting extra deflections as that collision energy is modulated. The proposal, fully detailed in the text "A critical discussion about Gravitation" (ISBN 978-1-7948-5105-4), is based on a critical discussion of the prescriptions of the general theory of relativity. Clues lead to the resurrection of an ether-like concept, not in the sense of a rigid structure to drift through, but as a fabric in which the things of Nature manifest their existence. This eventually suggests a quantum approach based on a graviton conjecture. In particular, GR is seen not as a theory encompassing Newtonian Dynamics (ND), but as a correction to ND that unfortunately misses the Gravito-ElectroMagnetic (GEM) interaction. The proposed theory aims to extend the general theory of relativity by adding an NNLO correction to the NLO one provided by GR. The correctness of this model, once definitively substantiated by the outcome of the proposed experiment, will finally complete the general theory of relativity, bypassing the demonstrative weaknesses of the most recent PPN formalisms and eventually offering a new way of construing physics.
Modeling Firebrand Spotting in WRF-Fire for Coupled Fire-Weather Prediction
Maria Frediani, Kasra Shamsaei, and 6 more
March 15, 2024
This study introduces the firebrand spotting parameterization implemented in WRF-Fire and applies it to the Marshall Fire, Colorado (2021) to demonstrate that, without fire spotting, wind-driven fire simulations cannot accurately represent the fire behavior. Spotting can be a dominant fire spread mechanism in wind-driven events, particularly those that occur in the wildland-urban interface (WUI), such as the Marshall Fire. To simulate these fires, the model’s ability to spot is critical, in that it accelerates the rate of spread and enables the fire to spread over streams and urban features such as highways. The firebrand spotting parameterization was implemented in WRF-Fire to improve simulations of wind-driven fires in a fire-atmosphere coupled system. In the parameterization, particles are generated with a set of fixed firebrand properties, from locations vertically aligned with the fire front. Firebrands are transported using a Lagrangian framework and firebrand physics is represented by a burnout (combustion) parameterization. Fire spots may occur when firebrands land on unburned grid points. The parameterization components are illustrated through idealized simulations and its application is demonstrated through simulations of a devastating real case - the Marshall Fire (Colorado, 2021). The simulations were verified using time of arrival and contingency table metrics. Our metrics show that when fire spots were included in the simulations, fire rate of spread and burn area consistently improved.
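As a schematic of the Lagrangian transport and burnout components described above, the sketch below advects firebrand particles in a gusty wind with a simple settling velocity and mass-loss law, recording where still-burning brands land; every constant and the wind field are illustrative assumptions, not the WRF-Fire parameterization.

```python
# Minimal sketch: Lagrangian transport of firebrand particles with a simple
# mass burnout law; brands that land while still burning mark candidate spots.
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps, n_brands = 0.5, 600, 200          # time step (s), steps, particles

# State: downwind position, height, and remaining (normalized) mass.
x = np.zeros(n_brands)
z = np.full(n_brands, 50.0)                    # released above the fire front (m)
mass = np.full(n_brands, 1.0)
burn_rate = 0.002                              # fraction of mass lost per second
settling = 1.5                                 # terminal fall speed (m/s)

landed_x = []
for _ in range(n_steps):
    active = (z > 0.0) & (mass > 0.2)          # burned-out brands are dropped
    if not active.any():
        break
    u = 12.0 + 2.0 * rng.standard_normal(active.sum())   # gusty wind (m/s)
    x[active] += u * dt
    z[active] -= settling * dt
    mass[active] -= burn_rate * mass[active] * dt
    just_landed = active & (z <= 0.0)
    landed_x.extend(x[just_landed].tolist())   # candidate spot-fire locations

print(f"{len(landed_x)} brands landed; farthest spot at {max(landed_x):.0f} m downwind")
```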
Intergovernmental Panel on Climate Change (IPCC) Tier 1 forest biomass estimates from...
Neha Hunka and 26 more
March 04, 2024
Aboveground biomass density (AGBD) estimates from Earth Observation (EO) can be presented with the consistency standards mandated by the United Nations Framework Convention on Climate Change (UNFCCC). This article delivers AGBD estimates, in the format of Intergovernmental Panel on Climate Change (IPCC) Tier 1 values for natural forests, sourced from the National Aeronautics and Space Administration's (NASA's) Global Ecosystem Dynamics Investigation (GEDI) and Ice, Cloud and land Elevation Satellite (ICESat-2), and the European Space Agency's (ESA's) Climate Change Initiative (CCI). It also provides the underlying classification of forests by ecozone, continent, and status (primary, young secondary (≤20 years), and old secondary (>20 years)) as geospatial layers. The approaches leverage the strengths of various EO-derived datasets, compiled in an open-science framework through the Multi-mission Algorithm and Analysis Platform (MAAP), enabling flexibility to adopt new datasets. EO-based AGBD estimates are expected to contribute to the IPCC Emission Factors Database in support of UNFCCC processes, and the forest classification is expected to support the generation of other policy-relevant datasets while reflecting ongoing shifts in global forests with climate change.
Generalized eddy-diffusivity mass-flux (GEM) formulation for the parameterization of...
Cristian V. Vraciu
March 05, 2024
Critical Limitations of the Least Outstanding Request Load Balancing policy in Servic...
Andrea Detti and 1 more
February 27, 2024
New approach method for solving nonlinear differential equations of blood flow with n...
Morteza Hamzeh and 3 more
March 05, 2024
In this paper, the effect of physical parameters, in the presence of a magnetic field, on the heat transfer and flow of a third-grade non-Newtonian nanofluid in a porous medium with an annular cross-section is investigated analytically. The nanofluid viscosity is treated with three models: a constant-viscosity model and two temperature-dependent models, the Reynolds model and Vogel's model, which are used to determine the effect of viscosity on the flow field. Analytical solutions for velocity, temperature, and nanoparticle concentration are developed with Akbari-Ganji's Method (AGM), which agrees closely with the numerical solution (fourth-order Runge-Kutta). The physical parameters used to extract results for the nondimensional variables of the nonlinear equations are the pressure gradient, Brownian motion parameter, thermophoresis parameter, magnetic field intensity, and Grashof number. The results show that increases in the pressure gradient and thermophoresis parameter and a decrease in the Brownian motion parameter raise the velocity profile, as do an increase in the Grashof number and a decrease in the MHD parameter. Furthermore, either an increase in the thermophoresis parameter or a decrease in the Brownian motion parameter enhances the nanoparticle concentration. The highest velocity is observed when Vogel's model is used for the viscosity.
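For context, the two temperature-dependent viscosity laws named above are commonly written in the forms below (typical forms from the third-grade-fluid literature; the exact nondimensionalization and constants vary between papers and are not taken from this study):

$$ \mu_{\text{Reynolds}}(\theta) = \mu_0\, e^{-M\theta}, \qquad \mu_{\text{Vogel}}(\theta) = \mu_0 \exp\!\left(\frac{A}{B+\theta}\right) $$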

