AUTHOREA

Preprints

Explore 66,104 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Stochastic inversion workflow using the gradual deformation in order to predict and m...
Lorenzo Perozzi
Gloaguen

and 2 more

April 07, 2015
ABSTRACT Due to budget constraints, CCS in deep saline aquifers is often carried out using only one injector well and one control well, which severely limits inferring the dynamics of the CO_2 plume. In such cases, monitoring of the CO_2 plume relies only on geological assumptions or indirect data. In this paper, we present a new two-step stochastic P- and S-wave velocity, density and porosity inversion approach that allows reliable monitoring of the CO_2 plume using time-lapse VSP. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian cosimulations. The realizations within each set of static models are then iteratively combined using a modified gradual deformation optimization technique, with the difference between computed and observed raw traces as the objective function. In the second step, these static models serve as input for a CO_2 injection history matching using the same modified gradual deformation scheme. At each gradual deformation step, the CO_2 injection is simulated and the corresponding full-wave traces are computed and compared to the observed data. The method has been tested on a synthetic heterogeneous saline aquifer model mimicking the environment of the CO_2 CCS pilot in the Becancour area, Quebec. The results show that the sets of optimized models of P- and S-wave velocity, density and porosity have an improved structural similarity with the reference models compared to conventional simulations.
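The gradual deformation update at the heart of this workflow can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' implementation: it uses the standard combination z(θ) = z₁ cos θ + z₂ sin θ of two independent Gaussian realizations (which remains Gaussian with the same covariance for any θ), and a stand-in quadratic misfit in place of the seismic-trace objective function.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradual_deformation(z1, z2, theta):
    """Combine two independent standard Gaussian realizations; the result
    is again standard Gaussian with the same covariance for any theta."""
    return z1 * np.cos(theta) + z2 * np.sin(theta)

def optimize_theta(z1, z2, objective, n_grid=181):
    """Grid search for the theta in [-pi, pi] minimizing the objective
    (a stand-in for the computed-vs-observed trace misfit)."""
    thetas = np.linspace(-np.pi, np.pi, n_grid)
    misfits = [objective(gradual_deformation(z1, z2, t)) for t in thetas]
    return thetas[int(np.argmin(misfits))]

# Toy objective: distance to a "reference" realization.
z_ref = rng.standard_normal(1000)
z1, z2 = rng.standard_normal(1000), rng.standard_normal(1000)
misfit = lambda z: float(np.sum((z - z_ref) ** 2))
theta_best = optimize_theta(z1, z2, misfit)
z_new = gradual_deformation(z1, z2, theta_best)
# z_new fits the objective at least as well as either parent realization,
# while preserving the Gaussian statistics of the static model.
```

Because θ = 0 and θ = π/2 recover the two parent realizations, the optimized model can never fit worse than either of them, which is what makes iterating this step attractive.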
The Resource Identification Initiative: A cultural shift in publishing
Anita Bandrowski
Matthew H. Brush

and 14 more

March 25, 2015
ABSTRACT A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or answer basic questions such as “How did other studies use resource X?”. To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the methods sections of papers and thereby improve identifiability and reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (i.e. software and databases). RRIDs are assigned by an authoritative database, for example a model organism database, for each type of resource. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central web portal (scicrunch.org/resources). RRIDs meet three key criteria: they are machine readable, free to generate and access, and are consistent across publishers and journals. The pilot was launched in February of 2014 and over 300 papers have appeared that report RRIDs. The number of journals participating has expanded from the original 25 to more than 40. Here, we present an overview of the pilot project and its outcomes to date. We show that authors are able to identify resources and are supportive of the goals of the project. Identifiability of the resources post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on reproducibility relating to research resources.
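RRIDs are designed to be machine readable; a minimal sketch of what that enables is a regular-expression scan of a methods section. The pattern below is a simplification for illustration (real RRID syntax has several authority-specific forms, e.g. AB_, SCR_, IMSR_JAX:), and the identifiers in the example string are merely plausible-looking, not verified entries.

```python
import re

# Simplified RRID pattern for illustration only; the real grammar is
# richer and authority-specific.
RRID_RE = re.compile(r"RRID:\s*([A-Z]+[_:][A-Za-z0-9_:-]+)")

def find_rrids(text):
    """Return all RRID identifiers mentioned in a block of text."""
    return RRID_RE.findall(text)

methods = ("Cells were stained with anti-GFAP (RRID:AB_2138153) and "
           "analyzed in ImageJ (RRID:SCR_003070).")
print(find_rrids(methods))  # ['AB_2138153', 'SCR_003070']
```

A scan like this is roughly what lets tools answer "How did other studies use resource X?" across a corpus of papers.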
Rapid Environmental Quenching of Satellite Dwarf Galaxies in the Local Group
Andrew Wetzel
Erik Tollerud

and 2 more

March 06, 2015
In the Local Group, nearly all of the dwarf galaxies ($\mstar\lesssim10^9\msun$) that are satellites within $300\kpc$ (the virial radius) of the Milky Way (MW) and Andromeda (M31) have quiescent star formation and little-to-no cold gas. This contrasts strongly with comparatively isolated dwarf galaxies, which are almost all actively star-forming and gas-rich. This near dichotomy implies a _rapid_ transformation after falling into the halos of the MW or M31. We combine the observed quiescent fractions for satellites of the MW and M31 with the infall times of satellites from the ELVIS suite of cosmological simulations to determine the typical timescales over which environmental processes within the MW/M31 halos remove gas and quench star formation in low-mass satellite galaxies. The quenching timescales for satellites with $\mstar<10^8\msun$ are short, $\lesssim2\gyr$, and decrease at lower $\mstar$. These quenching timescales can be $1-2\gyr$ longer if environmental preprocessing in lower-mass groups prior to MW/M31 infall is important. We compare with timescales for more massive satellites from previous works, exploring satellite quenching across the observable range of $\mstar=10^{3-11}\msun$. The environmental quenching timescale increases rapidly with satellite $\mstar$, peaking at $\approx9.5\gyr$ for $\mstar\sim10^9\msun$, and rapidly decreases at higher $\mstar$ to less than $5\gyr$ at $\mstar>5\times10^9\msun$. Thus, satellites with $\mstar\sim10^9\msun$, similar to the Magellanic Clouds, exhibit the longest environmental quenching timescales.
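The matching of observed quiescent fractions to simulated infall times described above can be illustrated with a toy calculation. Assumptions here: a uniform stand-in for the ELVIS infall-time distribution and the simplest possible matching (satellites are quenched once their time since infall exceeds a single threshold), which is far cruder than the paper's machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

def quenching_timescale(infall_lookback_gyr, quiescent_fraction):
    """Threshold on time-since-infall such that the fraction of satellites
    that fell in earlier than the threshold equals the observed quiescent
    fraction, i.e. the (1 - f_q) quantile of the infall-time distribution."""
    return float(np.quantile(infall_lookback_gyr, 1.0 - quiescent_fraction))

# Toy stand-in for the ELVIS infall-lookback-time distribution (Gyr ago).
t_infall = rng.uniform(0.0, 10.0, size=5000)
tau = quenching_timescale(t_infall, quiescent_fraction=0.8)
frac_quenched = float(np.mean(t_infall > tau))  # recovers ~0.8 by construction
```

With a high quiescent fraction, the recovered timescale is short, which is the qualitative origin of the ≲2 Gyr result for low-mass satellites.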
Ebola virus epidemiology, transmission, and evolution during seven months in Sierra L...
Daniel Park
Gytis Dudas

and 22 more

March 02, 2015
SUMMARY The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission, and offered important information for outbreak response. Here we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission.
Top-quark electroweak couplings at the FCC-ee
Patrick Janot
Alain Blondel

and 3 more

February 26, 2015
INTRODUCTION The design study of the Future Circular Colliders (FCC) in a 100-km ring in the Geneva area started at CERN at the beginning of 2014, as an option for post-LHC particle accelerators. The study has an emphasis on proton-proton and electron-positron high-energy frontier machines. In the current plans, the first step of the FCC physics programme would exploit a high-luminosity ${\rm e^+e^-}$ collider called FCC-ee, with centre-of-mass energies ranging from below the Z pole to the ${\rm t\bar t}$ threshold and beyond. A first look at the physics case of the FCC-ee can be found in Ref. . In this first look, the focus regarding top-quark physics was on precision measurements of the top-quark mass, width, and Yukawa coupling through a scan of the ${\rm t\bar t}$ production threshold, with $\sqrt{s}$ between 340 and 350 GeV. The expected precision on the top-quark mass was in turn used, together with the outstanding precisions on the Z peak observables and on the W mass, in a global electroweak fit to set constraints on weakly-coupled new physics up to a scale of 100 TeV. Although not studied in the first look, measurements of the top-quark electroweak couplings are of interest, as new physics might also show up via significant deviations of these couplings with respect to their standard-model predictions. Theories in which the top quark and the Higgs boson are composite lead to such deviations. The inclusion of a direct measurement of the ttZ coupling in the global electroweak fit is therefore likely to further constrain these theories. It has been claimed that both a centre-of-mass energy well beyond the top-quark pair production threshold and a large longitudinal polarization of the incoming electron and positron beams are crucially needed to independently access the ttγ and the ttZ couplings for both chirality states of the top quark. In Ref.
, it is shown that the measurements of the total event rate and the forward-backward asymmetry of the top quark, with 500 ${\rm fb}^{-1}$ at $\sqrt{s}=500$ GeV and with beam polarizations of ${\cal P} = \pm 0.8$, ${\cal P}^\prime = \mp 0.3$, allow for this distinction. The aforementioned claim is revisited in the present study. The sensitivity to the top-quark electroweak couplings is estimated here with an optimal-observable analysis of the lepton angular and energy distributions of over a million events from ${\rm t\bar t}$ production at the FCC-ee, in the $\ell \nu {\rm q \bar q b \bar b}$ final states (with $\ell = {\rm e}$ or μ), without incoming beam polarization and with a centre-of-mass energy not significantly above the ${\rm t\bar t}$ production threshold. Such a sensitivity can be understood from the fact that the top-quark polarization arising from its coupling to the Z is maximally transferred to the final-state particles via the weak top-quark decay ${\rm t \to W b}$, which has a 100% branching fraction: the lack of initial polarization is compensated by the presence of substantial final-state polarization, and by a larger integrated luminosity. A similar situation was encountered at LEP, where the measurement of the total rate of ${\rm Z} \to \tau^+\tau^-$ events and of the tau polarization was sufficient to determine the tau couplings to the Z, regardless of initial-state polarization. This letter is organized as follows. First, the reader is briefly reminded of the theoretical framework. Next, the statistical analysis of the optimal observables is described, and realistic estimates for the top-quark electroweak coupling sensitivities are obtained as a function of the centre-of-mass energy at the FCC-ee. Finally, the results are discussed, and prospects for further improvements are given.
A new method for identifying the Pacific-South American pattern and its influence on...
Damien Irving
Ian Simmonds

and 1 more

February 24, 2015
The Pacific-South American (PSA) pattern is an important mode of climate variability in the mid-to-high southern latitudes. It is widely recognized as the primary mechanism by which the El Niño-Southern Oscillation (ENSO) influences the south-east Pacific and south-west Atlantic, and in recent years has also been suggested as a mechanism by which longer-term tropical sea surface temperature trends can influence the Antarctic climate. This study presents a novel methodology for objectively identifying the PSA pattern. By rotating the global coordinate system such that the equator (a great circle) traces the approximate path of the pattern, the identification algorithm utilizes Fourier analysis as opposed to a traditional Empirical Orthogonal Function approach. The climatology arising from the application of this method to ERA-Interim reanalysis data reveals that the PSA pattern has a strong influence on temperature and precipitation variability over West Antarctica and the Antarctic Peninsula, and on sea ice variability in the adjacent Amundsen, Bellingshausen and Weddell Seas. Identified seasonal trends towards the negative phase of the PSA pattern are consistent with warming observed over the Antarctic Peninsula during autumn, but are inconsistent with observed winter warming over West Antarctica. Only a weak relationship is identified between the PSA pattern and ENSO, which suggests that the pattern might be better conceptualized as a preferred regional atmospheric response to various external (and internal) forcings.
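The Fourier-based identification step can be sketched as follows, under the simplifying assumption that the field of interest has already been sampled at equal longitude spacing along the rotated great circle; the wave-3 test signal is synthetic and stands in for reanalysis data.

```python
import numpy as np

def planetary_wave_amplitude(v_along_circle, wavenumbers=(1, 2, 3)):
    """Fourier amplitude of selected zonal wavenumbers for a field sampled
    at equal spacing along a (rotated) great circle."""
    n = len(v_along_circle)
    coeffs = np.fft.rfft(v_along_circle)
    # Amplitude of wavenumber k for a real signal (k > 0): 2|c_k| / n.
    return {k: 2.0 * float(np.abs(coeffs[k])) / n for k in wavenumbers}

# Synthetic signal along the circle: wave-3 of amplitude 5 plus noise.
lon = np.linspace(0, 2 * np.pi, 144, endpoint=False)
v = 5.0 * np.cos(3 * lon) + np.random.default_rng(1).normal(0, 0.5, 144)
amps = planetary_wave_amplitude(v)
print(max(amps, key=amps.get))  # 3 (the wave-3 component dominates)
```

The appeal over an EOF approach is that each wavenumber's amplitude and phase are read off directly from the transform rather than from fixed spatial patterns.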
The spin rate of pre-collapse stellar cores: wave driven angular momentum transport i...
Jim Fuller
Matteo Cantiello

and 4 more

February 22, 2015
The core rotation rates of massive stars have a substantial impact on the nature of core collapse supernovae and their compact remnants. We demonstrate that internal gravity waves (IGW), excited via envelope convection during a red supergiant phase or during vigorous late time burning phases, can have a significant impact on the rotation rate of the pre-SN core. In typical (10 M⊙ ≲ M ≲ 20 M⊙) supernova progenitors, IGW may substantially spin down the core, leading to iron core rotation periods $P_{\rm min,Fe} \gtrsim 50 \, {\rm s}$. Angular momentum (AM) conservation during the supernova would entail minimum NS rotation periods of $P_{\rm min,NS} \gtrsim 3 \, {\rm ms}$. In most cases, the combined effects of magnetic torques and IGW AM transport likely lead to substantially longer rotation periods. However, the stochastic influx of AM delivered by IGW during shell burning phases inevitably spins up a slowly rotating stellar core, leading to a maximum possible core rotation period. We estimate maximum iron core rotation periods of $P_{\rm max,Fe} \lesssim 10^4 \, {\rm s}$ in typical core collapse supernova progenitors, and a corresponding spin period of $P_{\rm max, NS} \lesssim 400 \, {\rm ms}$ for newborn neutron stars. This is comparable to the typical birth spin periods of most radio pulsars. Stochastic spin-up via IGW during shell O/Si burning may thus determine the initial rotation rate of most neutron stars. For a given progenitor, this theory predicts a Maxwellian distribution in pre-collapse core rotation frequency that is uncorrelated with the spin of the overlying envelope.
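The quoted mapping from iron-core spin to neutron-star spin follows from angular-momentum conservation. The sketch below assumes uniform spheres (I ∝ MR²) and illustrative radii (a ~1500 km iron core collapsing to a 12 km neutron star; these numbers are assumptions for illustration, not taken from the paper). With the paper's P_min,Fe ≈ 50 s it reproduces a millisecond-scale minimum period.

```python
def ns_spin_period(P_core_s, R_core_km, R_ns_km=12.0):
    """Spin period after collapse from angular-momentum conservation,
    treating core and neutron star as uniform spheres (I ∝ M R^2),
    so P_NS = P_core * (R_NS / R_core)^2."""
    return P_core_s * (R_ns_km / R_core_km) ** 2

# Illustrative: a 50 s iron core of radius ~1500 km -> 12 km neutron star.
P_ns = ns_spin_period(50.0, 1500.0)
print(f"{P_ns * 1e3:.1f} ms")  # 3.2 ms
```

The same scaling applied to the maximum core period of ~10⁴ s gives the quoted ≲ several hundred ms for newborn neutron stars.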
Software Use in Astronomy: An Informal Survey
Ivelina Momcheva
Erik Tollerud

and 1 more

February 08, 2015
INTRODUCTION Much of modern astronomy research depends on software. Digital images and numerical simulations are central to the work of most astronomers today, and anyone who is actively involved in astronomy research has a variety of software techniques in their toolbox. Furthermore, the sheer volume of data has increased dramatically in recent years. The efficient and effective use of large data sets increasingly requires more than rudimentary software skills. Finally, as astronomy moves towards the open code model, propelled by pressure from funding agencies and journals as well as the community itself, readability and reusability of code will become increasingly important (Figure [fig:xkcd]). Yet we know few details about the software practices of astronomers. In this work we aim to gain a greater understanding of the prevalence of software tools, the demographics of their users, and the level of software training in astronomy. The astronomical community has, in the past, provided funding and support for software tools intended for the wider community. Examples of this include the Goddard IDL library (funded by the NASA ADP), IRAF (supported and developed by AURA at NOAO), STSDAS (supported and developed by STScI), and the Starlink suite (funded by PPARC). As the field develops, new tools are required and we need to focus our efforts on ones that will have the widest user base and the lowest barrier to utilization. For example, as our work here shows, the much larger astronomy user base of Python relative to the language R suggests that tools in the former language are likely to get many more users and contributors than the latter.
More recently, there has been a growing discussion of the importance of data analysis and software development training in astronomy (e.g., the special sessions at the 225th AAS “Astroinformatics and Astrostatistics in Astronomical Research Steps Towards Better Curricula” and “Licensing Astrophysics Codes”, which were standing room only). Although astronomy and astrophysics went digital long ago, the formal training of astronomy and physics students rarely involves software development or data-intensive analysis techniques. Such skills are increasingly critical in the era of ubiquitous “Big Data” (e.g., , or the 2015 NOAO Big Data conference). Better information on the needs of researchers as well as the current availability of training opportunities (or lack thereof) can be used to inform, motivate and focus future efforts towards improving this aspect of the astronomy curriculum. In 2014 the Software Sustainability Institute carried out an inquiry into the software use of researchers in the UK (, see also the associated presentation). This survey provides useful context for software usage by researchers, as well as a useful definition of “research software”: Software that is used to generate, process or analyze results that you intend to appear in a publication (either in a journal, conference paper, monograph, book or thesis). Research software can be anything from a few lines of code written by yourself, to a professionally developed software package. Software that does not generate, process or analyze results - such as word processing software, or the use of a web search - does not count as ‘research software’ for the purposes of this survey. However, this survey was limited to researchers at UK institutions. More importantly, it was not focused on astronomers, who may have quite different software practices from scientists in other fields. 
Motivated by these issues and related discussions during the .Astronomy 6 conference, we created a survey to explore software use in astronomy. In this paper, we discuss the methodology of the survey in §[sec:datamethods], the results from the multiple-choice sections in §[sec:res] and the free-form comments in §[sec:comments]. In §[sec:ssicompare] we compare our results to the aforementioned SSI survey and in §[sec:conc] we conclude. We have made the anonymized results of the survey and the code to generate the summary figures available at https://github.com/eteq/software_survey_analysis. This repository may be updated in the future if a significant number of new respondents fill out the survey[1]. [1] http://tinyurl.com/pvyqw59
A minimum standard for publishing computational results in the weather and climate sc...
Damien Irving

January 14, 2015
Weather and climate science has undergone a computational revolution in recent decades, to the point where all modern research relies heavily on software and code. Despite this profound change in the research methods employed by weather and climate scientists, the reporting of computational results has changed very little in relevant academic journals. This lag has led to something of a reproducibility crisis, whereby it is impossible to replicate and verify most of today’s published computational results. While it is tempting to simply decry the slow response of journals and funding agencies in the face of this crisis, there are very few examples of reproducible weather and climate research upon which to base new communication standards. In an attempt to address this deficiency, this essay describes a procedure for reporting computational results that was employed in a recent _Journal of Climate_ paper. The procedure was developed to be consistent with recommended computational best practices and seeks to minimize the time burden on authors, which has been identified as the most important barrier to publishing code. It should provide a starting point for weather and climate scientists looking to publish reproducible research, and it is proposed that journals could adopt the procedure as a minimum standard.
Number unit, Hilbert Space, Quantum Number Theory.
Benedict Irwin

March 16, 2026
ABSTRACT I investigate what I perceive to be a Hilbert space of numbers. I use the concept of a number unit, in analogy to length, area, volume, etc., such that a prime has dimensions of p. Compounds and partitions are visualised.
IEDA EarthChem: Supporting the sample-based geochemistry community with data resource...
Leslie Hsu

April 06, 2017
ABSTRACT Integrated sample-based geochemical measurements enable new scientific discoveries in the Earth sciences. However, integration of geochemical data is difficult because of the variety of sample types and measured properties, idiosyncratic analytical procedures, and the time commitment required for adequate documentation. To support geochemists in integrating and reusing geochemical data, EarthChem, part of IEDA (Integrated Earth Data Applications), develops and maintains a suite of data systems to serve the scientific community. The EarthChem Library focuses on dataset publication, accessibility, and linking with other sources. Topical synthesis databases (e.g., PetDB, SedDB, Geochron) integrate data from several sources and preserve metadata associated with analyzed samples. The EarthChem Portal optimizes data discovery and provides analysis tools. Contributing authors obtain citable DOI identifiers, usage reports of their data, and increased discoverability. The community benefits from open access to data leading to accelerated scientific discoveries. Growing citations of EarthChem systems demonstrate its success.
Parameter estimation on gravitational waves from neutron-star binaries with spinning...
Ben Farr
Christopher P L Berry

and 16 more

December 11, 2014
INTRODUCTION As we enter the advanced-detector era of ground-based gravitational-wave (GW) astronomy, it is critical that we understand the abilities and limitations of the analyses we are prepared to conduct. Of the many predicted sources of GWs, binary neutron-star (BNS) coalescences are paramount; their progenitors have been directly observed, and the advanced detectors will be sensitive to their GW emission up to ∼400 Mpc away. When analyzing a GW signal from a circularized compact binary merger, strong degeneracies exist between parameters describing the binary (e.g., distance and inclination). To properly estimate any particular parameter(s) of interest, the marginal distribution is estimated by integrating the joint posterior probability density function (PDF) over all other parameters. In this work, we sample the posterior PDF using software implemented in the LALINFERENCE library. Specifically we use results from LALINFERENCE_NEST, a nested sampling algorithm, and LALINFERENCE_MCMC, a Markov-chain Monte Carlo algorithm \citep[chapter 12]{Gregory2005}. Previous studies of BNS signals have largely assessed parameter constraints assuming negligible neutron-star (NS) spin, restricting models to nine parameters. This simplification has largely been due to computational constraints, but the slow spin of NSs in short-period BNS systems observed to date \citep[e.g.,][]{Mandel_2010} has also been used as justification. However, proper characterization of compact binary sources _must_ account for the possibility of non-negligible spin; otherwise parameter estimates will be biased. This bias can potentially lead to incorrect conclusions about source properties and even misidentification of source classes. Numerous studies have looked at the BNS parameter estimation abilities of ground-based GW detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory \citep[aLIGO;][]{Aasi_2015} and Advanced Virgo \citep[AdV;][]{Acernese_2014} detectors.
assessed localization abilities on a simulated non-spinning BNS population. looked at several potential advanced-detector networks and quantified the parameter-estimation abilities of each network for a signal from a fiducial BNS with non-spinning NSs. demonstrated the ability to characterize signals from non-spinning BNS sources with waveform models for spinning sources using Bayesian stochastic samplers in the LALINFERENCE library. used approximate methods to quantify the degeneracy between spin and mass estimates, assuming the compact objects’ spins are aligned with the orbital angular momentum of the binary \citep[but see][]{Haster_2015}. simulated a collection of loud signals from non-spinning BNS sources in several mass bins and quantified parameter estimation capabilities in the advanced-detector era using non-spinning models. introduced precession from spin–orbit coupling and found that the additional richness encoded in the waveform could reduce the mass–spin degeneracy, helping BNSs to be distinguished from NS–black hole (BH) binaries. conducted a similar analysis of a large catalog of sources and found that it is difficult to infer the presence of a mass gap between NSs and BHs, although this may still be possible using a population of a few tens of detections. Finally, and the follow-on represent an (almost) complete end-to-end simulation of BNS detection and characterization during the first 1–2 years of the advanced-detector era. These studies simulated GWs from an astrophysically motivated BNS population, then detected and characterized sources using the search and follow-up tools that are used for LIGO–Virgo data analysis. The final stage of the analysis missing from these studies is the computationally expensive characterization of sources while accounting for the compact objects’ spins and their degeneracies with other parameters.
The present work is the final step of BNS characterization for the simulations using waveforms that account for the effects of NS spin. We begin with a brief introduction to the source catalog used for this study in section [sec:sources]. Then, in section [sec:spin] we describe the results of parameter estimation from a full analysis that includes spin. In section [sec:mass] we look at mass estimates in more detail and spin-magnitude estimates in section [sec:spin-magnitudes]. In section [sec:extrinsic] we consider the estimation of extrinsic parameters: sky position (section [sec:sky]) and distance (section [sec:distance]), which we do not expect to be significantly affected by the inclusion of spin in the analysis templates. We summarize our findings in section [sec:conclusions]. A comparison of computational costs for spinning and non-spinning parameter estimation is given in appendix [ap:CPU].
A search for R-parity violating Supersymmetric top decays at CMS with \(\sqrt{s} = 8\...
Alec Aivazis
Ari Kaplan

and 1 more

September 27, 2021
A search for a supersymmetric top decay assuming a 100% branching ratio of $\tilde{t} \rightarrow \mu^+ \mu^- b$ is presented using a minimally supersymmetric model at an integrated luminosity of 19.5 ${\rm fb}^{-1}$. The datasets were recorded with the CMS detector at the LHC. Using Bayesian marginalization, an upper limit on the cross section of this process is computed, and a cutoff point is calculated below which the data do not support the presence of the target decay. This cutoff point was calculated to be around 780 GeV.
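The Bayesian upper-limit construction mentioned here reduces, once a marginal posterior for the cross section is in hand, to reading off a quantile. The snippet below is a generic sketch with a toy exponential posterior standing in for the marginalized posterior of the search; it is not the CMS analysis chain.

```python
import numpy as np

def bayesian_upper_limit(posterior_samples, credibility=0.95):
    """Upper limit: the quantile of the marginal posterior for the cross
    section below which `credibility` of the probability mass lies."""
    return float(np.quantile(posterior_samples, credibility))

# Toy marginal posterior (Exp(1)), standing in for the marginalized
# posterior from the search; the 95% quantile of Exp(1) is ln(20) ~ 3.0.
rng = np.random.default_rng(7)
sigma_samples = rng.exponential(scale=1.0, size=100_000)
ul = bayesian_upper_limit(sigma_samples)
```

In a real analysis the samples would come from marginalizing the full likelihood over nuisance parameters rather than from a named distribution.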
Golden Ratio in the Hydrogen Atom
Benedict Irwin

March 13, 2026
ABSTRACT I plot the energy levels of the hydrogen atom in the form j(E), where j is the total angular momentum quantum number. By solving for a bifurcating form of the energies, I stumbled across the golden ratio as a coefficient of a function that takes us from one energy level $E_{nj}$ to $E_{n,j \pm 1}$.
Recursive Integrals
Benedict Irwin

March 13, 2026
ABSTRACT An attempt to elucidate a form of recursive integral was made. A link to chaotic systems in bifurcation diagrams was found as the iterative solution to an integral under variation of a general parameter in the integrand. This suggests that, in some sense, the integral has multiple values, with the solution a scaled variant of the logistic map. Approximate piecewise functions are used to map the bifurcation pattern in an attempt to close the form of the recursive integral for small numbers of bifurcations. In the process, a potential connection between the Embree-Trefethen constant, the Feigenbaum constant δ, and the closely fitting functional forms of the integral is exhibited. A transformation (elliptic? hyperbolic?) is defined to move from one bifurcation to the next; repeated application adds intricate structure (bounded under a value) in the region r ∈ [0, 4].
Notes on Mixing Length Theory
Matteo Cantiello
Yan-Fei Jiang

and 1 more

December 01, 2021
MLT These notes are mostly inspired by reading Cox & Giuli (“Principles of stellar structure”); some insights are from A. Maeder (“Physics, formation and evolution of rotating stars”) and Kippenhahn & Weigert (“Stellar Structure and Evolution”). Pressure Scale Height In hydrostatic equilibrium we can define the total pressure scale height, $H_P$, as $$H_P \equiv -\frac{\mathrm{d}r}{\mathrm{d}\ln P} = \frac{P}{\rho g},$$ where $P$ is the total pressure ($P_{\rm gas} + P_{\rm rad}$). The pressure scale height is a measure of the distance over which the pressure changes by an appreciable fraction of itself.
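A quick worked example of $H_P = P/(\rho g)$: with rough solar-photosphere values (assumed here for illustration, not taken from the notes), the scale height comes out near 150 km.

```python
def pressure_scale_height(P, rho, g):
    """H_P = P / (rho * g), from hydrostatic equilibrium dP/dr = -rho * g
    together with the definition H_P = -dr/d(ln P)."""
    return P / (rho * g)

# Rough solar-photosphere values (illustrative assumptions):
P = 1.2e4    # total pressure, Pa
rho = 3e-4   # density, kg/m^3
g = 274.0    # surface gravity, m/s^2
print(f"{pressure_scale_height(P, rho, g) / 1e3:.0f} km")  # 146 km
```

This is tiny compared with the solar radius, which is why the photosphere appears as such a sharp edge; deep in a convective envelope $H_P$ grows to a sizable fraction of the stellar radius.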
A novel approach to diagnosing Southern Hemisphere planetary wave activity and its in...
Damien Irving
Ian Simmonds

and 1 more

November 16, 2014
Southern Hemisphere mid-to-upper tropospheric planetary wave activity is characterized by the superposition of two zonally-oriented, quasi-stationary waveforms: zonal wavenumber one (ZW1) and zonal wavenumber three (ZW3). Previous studies have tended to consider these waveforms in isolation and with the exception of those studies relating to sea ice, little is known about their impact on regional climate variability. We take a novel approach to quantifying the combined influence of ZW1 and ZW3, using the strength of the hemispheric meridional flow as a proxy for zonal wave activity. Our methodology adapts the wave envelope construct routinely used in the identification of synoptic-scale Rossby wave packets and improves on existing approaches by allowing for variations in both wave phase and amplitude. While ZW1 and ZW3 are both prominent features of the climatological circulation, the defining feature of highly meridional hemispheric states is an enhancement of the ZW3 component. Composites of the mean surface conditions during these highly meridional, ZW3-like anomalous states (i.e. months of strong planetary wave activity) reveal large sea ice anomalies over the Amundsen and Bellingshausen Seas during autumn and along much of the East Antarctic coastline throughout the year. Large precipitation anomalies in regions of significant topography (e.g. New Zealand, Patagonia, coastal Antarctica) and anomalously warm temperatures over much of the Antarctic continent were also associated with strong planetary wave activity. The latter has potentially important implications for the interpretation of recent warming over West Antarctica and the Antarctic Peninsula.
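The wave-envelope construct mentioned above can be sketched with the analytic signal (Hilbert transform), which recovers the combined amplitude of superposed waves independently of their phases; the ZW1 + ZW3 test signal below is synthetic and merely illustrative of the idea.

```python
import numpy as np
from scipy.signal import hilbert

def wave_envelope(v):
    """Amplitude envelope via the analytic signal, |v + i * H(v)|; it is
    insensitive to wave phase, so superposed waves are captured jointly."""
    return np.abs(hilbert(v))

lon = np.linspace(0, 2 * np.pi, 360, endpoint=False)
v = 2.0 * np.cos(lon) + 3.0 * np.cos(3 * lon)  # synthetic ZW1 + ZW3
env = wave_envelope(v)
# The envelope ranges between |3 - 2| = 1 and |3 + 2| = 5, tracing where
# the two waveforms reinforce or cancel.
```

Tracking the envelope rather than individual wavenumber amplitudes is what allows variations in both wave phase and amplitude to be accommodated.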
Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preproce...
Andrew Wetzel
Alis Deason

and 2 more

October 31, 2014
In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with $\mstar=10^{3-9} \msun$ using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically $5-8 \gyr$ ago at z = 0.5 − 1. However, they first fell into any host halo typically $7-10 \gyr$ ago at z = 0.7 − 1.5. This difference arises because many satellites experienced “group preprocessing” in another host halo, typically of $\mvir \sim 10^{10-12} \msun$, before falling into the MW/M31 halos. Lower-mass satellites and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; half of all satellites with $\mstar < 10^6 \msun$ were preprocessed in a group. Infalling groups also drive most satellite-satellite mergers within the MW/M31 halos. Finally, _none_ of the surviving satellites at z = 0 were within the virial radius of their MW/M31 halo during reionization (z > 6), and only <4% were satellites of any other host halo during reionization. Thus, effects of cosmic reionization versus host-halo environment on the formation histories of surviving dwarf galaxies in the Local Group occurred at distinct epochs and are separable in time.
The Victorian Earthquake Hazard Map
Dan Sandiford
Tim Rawling

and 2 more

October 07, 2022
SUMMARY This report summarises the development of a new Probabilistic Seismic Hazard Analysis (PSHA) for Victoria called the Victorian Earthquake Hazard Map (VEHM). PSHA forecasts the strength of shaking expected over a given time window (return period). The primary inputs are historical seismicity catalogues, paleoseismic (active fault) data, and ground-motion prediction equations. A key component in the development of the VEHM was the integration of new geophysical data, derived from deployments of Australian Geophysical Observing System seismometers in Victoria, with a variety of publicly available datasets including seismicity catalogues, geophysical imagery and geological mapping. This has resulted in a new dataset that constrains the models presented in the VEHM and is also provided as a stand-alone resource for both reference and future analysis. The VEHM provides a Victorian-focussed earthquake hazard estimation tool that offers an alternative to the nationally focussed 2012 Australian Earthquake Hazard Map. The major difference between the two maps is the inclusion of active fault locations and slip estimates in the VEHM. There is also a significant difference in hazard estimation between the two maps (even without including fault-related seismicity), due primarily to differences in seismicity analysis; these issues are described in the discussion section of this report. Together, these differences make the VEHM a higher-fidelity and more conservative hazard model. The VEHM currently exists as a series of online resources intended to assist those in engineering, planning and disaster management. It is a dynamic dataset: the inputs will continue to be refined as new constraints are included and the map is made compatible with the Global Earthquake Model (GEM) software, due for release in late 2014. The VEHM was funded through the Natural Disaster Resilience Grants Scheme (NDRGS).
The NDRGS is a grant program funded by the Commonwealth Attorney-General’s Department under the National Partnership Agreement on Natural Disaster Resilience signed by the Prime Minister and Premier. The purpose of the National Partnership Agreement is to contribute towards implementation of the National Strategy for Disaster Resilience, supporting projects leading to the following outcomes: 1. reduced risk from the impact of disasters and 2. appropriate emergency management, including volunteer capability and capacity, consistent with the State’s risk profile.
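The return-period language used in PSHA follows from a Poisson occurrence model, which converts a return period into a probability of exceedance over a time window. A minimal sketch of that relation (the function name is ours, and the 475-year / 10%-in-50-years pairing is the conventional design benchmark, not a VEHM result):

```python
import math

def exceedance_probability(window_years, return_period_years):
    """Probability of at least one exceedance within a time window,
    assuming a Poisson occurrence model with an average rate of one
    event per return period."""
    return 1.0 - math.exp(-window_years / return_period_years)

# Conventional design benchmark: shaking with a 475-year return
# period has roughly a 10% chance of exceedance in any 50-year window.
p = exceedance_probability(50, 475)
```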
Distinguishing disorder from order in irreversible decay processes
Jonathan Nichols
Jason R. Green

and 2 more

August 25, 2014
Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay, iA → products, i = 1, 2, 3, …, n, governed by (non)linear rate equations: the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with i ≥ 2. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and that the equality is a bound satisfied only when the rate coefficient is constant in time.
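The inequality described in the abstract has the Cauchy–Schwarz form (t_f − t_i) ∫ k(t)² dt ≥ ( ∫ k(t) dt )², with equality exactly when k(t) is constant. The following numerical check is our own sketch; the illustrative rate coefficients are not the paper's empirical models.

```python
import numpy as np

def integrate(y, t):
    """Trapezoidal rule on a uniform time grid."""
    h = t[1] - t[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def disorder(k, t):
    """Left minus right side of the inequality
    (t_f - t_i) * int k^2 dt  >=  ( int k dt )^2 :
    non-negative, and zero only for a constant rate coefficient."""
    dt = t[-1] - t[0]
    return dt * integrate(k**2, t) - integrate(k, t)**2

t = np.linspace(0.0, 10.0, 2001)
k_static = np.full_like(t, 0.5)             # ordered: constant in time
k_dynamic = 0.5 * (1.0 + 0.8 * np.sin(t))   # disordered: fluctuating

# disorder(k_static, t)  -> ~0  (equality: no disorder)
# disorder(k_dynamic, t) -> > 0 (strict inequality: cumulative disorder)
```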
Real-space grids and the Octopus code as tools for the development of new simulation...
Xavier Andrade
David A. Strubbe

and 15 more

August 18, 2014
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties are a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
The "Paper" of the Future
Alyssa Goodman
Josh Peek

and 10 more

January 17, 2021
_A 5-minute video demonstration of this paper is available at this YouTube link._ PREAMBLE A variety of research on human cognition demonstrates that humans learn and communicate best when more than one processing system (e.g. visual, auditory, touch) is used. Related research also shows that, no matter how technical the material, most humans retain and process information best when they can put a narrative "story" to it. So, when considering the future of scholarly communication, we should be careful not to blithely do away with the linear narrative format that articles and books have followed for centuries: instead, we should enrich it. Much more than text is used to communicate in science. Figures, which include images, diagrams, graphs, charts, and more, have enriched scholarly articles since the time of Galileo, and ever-growing volumes of data underpin most scientific papers. When scientists communicate face-to-face, as in talks or small discussions, these figures are often the focus of the conversation. In the best discussions, scientists can manipulate the figures and access the underlying data in real time, so as to test out various what-if scenarios and to explain findings more clearly. THIS SHORT ARTICLE EXPLAINS—AND SHOWS WITH DEMONSTRATIONS—HOW SCHOLARLY "PAPERS" CAN MORPH INTO LONG-LASTING RICH RECORDS OF SCIENTIFIC DISCOURSE, enriched with deep data and code linkages, interactive figures, audio, video, and commenting.
Compressed Sensing for the Fast Computation of Matrices: Application to Molecular Vib...
Jacob Sanders
Xavier Andrade

and 2 more

July 11, 2014
This article presents a new method to compute matrices from numerical simulations, based on the ideas of sparse sampling and compressed sensing. The method is useful for problems where the determination of the entries of a matrix constitutes the computational bottleneck. We apply this new method to an important problem in computational chemistry: the determination of molecular vibrations from electronic structure calculations, where our results show that the overall scaling of the procedure can be improved in some cases. Moreover, our method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations, resulting in a significant 3× speed-up in actual calculations.
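The bootstrapping idea, using a cheap low-accuracy pass to decide where expensive high-accuracy matrix entries are actually needed, can be sketched as follows. This is an illustration of the general framework only, not the paper's compressed-sensing procedure; all function names and the toy tridiagonal "Hessian" are our own assumptions.

```python
import numpy as np

def bootstrap_matrix(cheap_entry, accurate_entry, n, threshold=1e-3):
    """Use a cheap low-accuracy pass to locate the significant entries,
    then call the expensive high-accuracy method only for those."""
    mat = np.zeros((n, n))
    expensive_calls = 0
    for i in range(n):
        for j in range(n):
            if abs(cheap_entry(i, j)) > threshold:
                mat[i, j] = accurate_entry(i, j)
                expensive_calls += 1
    return mat, expensive_calls

# Toy stand-ins for two levels of theory: a tridiagonal "Hessian".
def accurate_entry(i, j):
    return 2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)

def cheap_entry(i, j):
    # Noisy but structurally faithful low-accuracy estimate.
    rng = np.random.default_rng(1000 * i + j)
    return accurate_entry(i, j) * (1.0 + 0.1 * rng.standard_normal())

mat, calls = bootstrap_matrix(cheap_entry, accurate_entry, 20)
# Only the tridiagonal band (58 of 400 entries) needed the expensive method.
```

The saving grows with matrix sparsity: for a banded n×n matrix the expensive cost drops from O(n²) entries to O(n), which is the flavour of scaling improvement the abstract reports for molecular vibrations.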
Large-Scale Microscopic Traffic Behaviour and Safety Analysis of Québec Roundabout De...
Paul St-Aubin
Nicolas Saunier

and 2 more

July 08, 2014
INTRODUCTION Roundabouts are a relatively new design for intersection traffic management in North America. With great promises from abroad in terms of safety as well as capacity (roundabouts are a staple of European road design), roundabouts have only recently proliferated in parts of North America, including the province of Québec. However, questions remain regarding the feasibility of introducing the roundabout to regions where driving culture and road design philosophy differ and where drivers are not habituated to their use. This aspect of road user behaviour integration is crucial for their implementation, for roundabouts manage traffic conflicts passively. In roundabouts, road user interactions and driving conflicts are handled entirely by way of driving etiquette between road users: lane merging, right-of-way, yielding behaviour, and eye contact in the case of vulnerable road users are all at play in successful passage negotiation at a roundabout. This is in contrast with typical North American intersections managed by computer-controlled traffic lights (or, on occasion, police officers) and with traffic circles of all kinds, which are also signalized. And while roundabouts share much in common with four- and two-way stops, they are frequently used for high-capacity, even high-speed, intersections where such stops would normally not be justified. Resistance to adoption remains significant in some areas, notably on the part of vulnerable road users such as pedestrians and cyclists, but also among some drivers. While a number of European studies cite reductions in accident probability and accident severity, particularly for the Netherlands, Denmark, and Sweden, research on roundabouts in North America is still limited, and even fewer attempts at microscopic behaviour analysis exist anywhere in the world.
The latter is important because it provides insight into the inner mechanics of driving behaviour, which might be key to tailoring roundabout design for regional adoption and implementation efforts. Fortunately, more systematic and data-rich analysis techniques are becoming available today. This paper proposes the application of a novel, video-based, semi-automated trajectory analysis approach for large-scale microscopic behavioural analysis of 20 of 100 available roundabouts in Québec, investigating 37 different roundabout weaving zones. The objectives of this paper are to explore the impact of Québec roundabout design characteristics, geometry, and built environment on driver behaviour and safety through microscopic, video-based trajectory analysis. Driver behaviour is characterized by merging speed and time-to-collision, a maturing indicator of surrogate safety and behaviour analysis in the field of transportation safety. In addition, this work represents one of the largest applications of surrogate safety analysis to date.
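In its simplest constant-velocity form, the time-to-collision indicator mentioned above is the spacing between two road users divided by their closing speed. A minimal sketch (the function name and the numbers are illustrative assumptions, not drawn from the study's trajectory data):

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """Constant-velocity time-to-collision for two road users on the
    same path: spacing divided by closing speed (m/s). Returns None
    when the users are not converging (no projected collision)."""
    closing = v_follow - v_lead
    if closing <= 0:
        return None
    return gap_m / closing

# A merging vehicle 15 m behind a circulating one, closing at 3 m/s:
ttc = time_to_collision(15.0, 11.0, 8.0)   # 5.0 s
```

Lower TTC values indicate less time available for evasive action; in surrogate safety analysis, the distribution of minimum TTC values over many observed interactions serves as a proxy for collision risk without waiting for accidents to occur.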