AUTHOREA

Preprints

Explore 66,104 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.

A Short Guide to Using Python For Data Analysis In Experimental Physics (V3)
Nathanael A. Fortune, Rebecca Webster

April 01, 2026
Common signal processing tasks in the numerical handling of experimental data include interpolation, smoothing, and propagation of uncertainty. A comparison of experimental results to a theoretical model further requires curve fitting, the plotting of functions and data, and a determination of the goodness of fit. These tasks typically require an interactive, exploratory approach to the data, yet for the results to be reliable, the original data need to be freely available and the resulting analysis readily reproducible. In this article, we provide examples of how to use the Numerical Python (NumPy) and Scientific Python (SciPy) packages and interactive Jupyter Notebooks to accomplish these goals for data stored in a common plain text spreadsheet format. Sample Jupyter notebooks containing the Python code used to carry out these tasks are included and can be used as templates for the analysis of new data.
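As a flavour of the workflow the guide describes, here is a minimal sketch (not taken from the article's notebooks; the file name, column layout, and linear model are illustrative assumptions) of fitting a model to two-column plain-text data with SciPy, estimating parameter uncertainties, and computing a crude reduced chi-square goodness of fit:

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative two-column text file (x, y); real data may also carry uncertainties.
    x, y = np.loadtxt("data.txt", unpack=True)

    def model(x, a, b):
        # Example linear model; the guide's notebooks may use other models.
        return a * x + b

    popt, pcov = curve_fit(model, x, y)
    perr = np.sqrt(np.diag(pcov))          # 1-sigma parameter uncertainties
    resid = y - model(x, *popt)
    chi2_red = np.sum(resid**2) / (len(x) - len(popt))  # crude goodness of fit
    print(popt, perr, chi2_red)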
Asymmetric Trust and Causal Reasoning in Blockchain-based AIs
Dr. Percy Venegas

May 30, 2018
Workshop on Blockchain Networks and Information Flow (ICCS 2018). We use genetic programming evolved networks, vector fields, and signal processing to study time-varying exposures where trust is implied (e.g. a conversion event from attention flow to financial commitment). The datasets are behavioral finance time series (from on-chain data, such as fees, and off-chain data, such as clickstreams), which we use to elaborate on various complexity metrics of causality, through the creation of parametric network graphs. We discuss the related methods and applications and conclude with the notion of social memory irreversibility and value by memory as useful constructs that take advantage of the natural existence of trust asymmetries, which can be operationalized by embedded AIs that use blockchains both as the substrate of their intelligence and as social computers. Keywords: systemic risk, behavioral finance, economic complexity, evolutionary computation, computational trust, blockchain, cryptocurrencies, market microstructure.
Featural-visual Indexicality and Sound-shape Congruency in a Phonologically Engineere...
Marcia S. Suzuki, M.A.

August 24, 2021
The term uniskript was coined to refer to a class of phonologically engineered alphabets that employ visual-featural indexicality combined with sound-shape congruency to represent speech. In this working paper, I introduce the uniskript methodology, an alphabet generator technique that uses indices instead of symbols to represent the flow of speech. I refer to the Peircean theory of signs to explain the crucial semiotic distinction between uniskript and the traditional alphabets: in uniskript, an icon resembling relevant articulatory features of a given phoneme is used to index sound to shape. I also indicate how the findings in sound symbolism were incorporated into the indices to facilitate cross-modal correspondences. I propose that uniskript's indexical nature and sensorial mappings can explain why it is so much easier to learn than symbolic and sensorially incongruent alphabets. I then briefly discuss how the study of uniskript alphabets can shed some light on the role of cross-modal correspondences in learning. It can also bring a deeper understanding of the role of phonology in sound symbolism. Finally, I refer to some applications of uniskript in the teaching of literacy and in remediating reading issues, and illustrate the entire concept by introducing a uniskript alphabet developed as a tool to teach pronunciation in an ESL program. Keywords: uniskript, alphabets, sound symbolism, sound-shape iconicity, cross-modal congruency, phonology, second language learning, pronunciation in L2
Aged blood inhibits hippocampal function through VCA...
Guang Yang and 1 more

July 08, 2018
Aged blood inhibits hippocampal neurogenesis and activates microglia through VCAM1 at the blood-brain barrier. Hanadie Yousef, Cathrin J Czupalla, Davis Lee, Ashley Burke, Michelle Chen, Judith Zandstra, Elisabeth Berber, Benoit Lehallier, Vidhu Mathur, Ramesh V Nair, Liana Bonanno, Taylor Merkel, Markus Schwaninger, Stephen Quake, Eugene C Butcher, Tony Wyss-Coray. bioRxiv preprint first posted online Jan. 3, 2018; doi: http://dx.doi.org/10.1101/242198. Humanity has been seeking the fountain of youth through the ages, and nowadays many promising findings against aging have been made public. However, with increasing life expectancy, cognitive decline continues to be one of the most concerning health challenges. Age-related neurodegeneration, typically in the hippocampus, is responsible for many geriatric brain diseases such as Alzheimer's disease. The hippocampus subserves learning and memory, and is often the first region of the brain to suffer injury \citep{Castellano_2017}. Due to its vulnerability to the adverse effects of aging, the hippocampus has become one of the most important targets of attempts to mitigate aging damage to the brain. Previous studies have discovered the rejuvenating effects of young blood, which can revitalize hippocampal function in aged mice treated with plasma from young mice \citep{Castellano_2017,Villeda_2011}. Conversely, some studies have also revealed that besides natural aging, hippocampal deterioration can be driven by treatment with plasma from aged individuals \citep{Villeda_2011,Rebo_2016}. As early as 2005, an experiment using vascular anastomoses of young and aged mice proposed the idea that old plasma relies on systemic inhibitory factors to degenerate organs, including the brain, in young mice \citep{Conboy_2005}. Recent research carried out by \citet{Yousef_2018} at Stanford University also observed a sufficient capacity of aged plasma to trigger aging phenotypes in young brains, primarily neurogenesis suppression and microglia activation in the hippocampus. Activated microglia are known to be a chronic source of diverse neurotoxins that lead to loss of neuronal function, particularly in the aged brain and in neurodegenerative diseases \citep{Lull_2010}. These two phenomena are considered to be cellular hallmarks of brain aging. \citet{Yousef_2018} hypothesized that the deterioration of hippocampal function in an aged circulatory environment is mediated by the blood-brain barrier (BBB), because the BBB separates the brain from the blood and protects the brain parenchyma from harmful factors in the circulating milieu. In their paper, \citet{Yousef_2018} explained that factors in blood affect brain cells through the behaviour of the brain endothelial cells (BECs) constituting the BBB. To determine the proteins involved in age-related changes in BECs, \citet{Yousef_2018} compared plasma proteomic differences between healthy aging control groups and identified 31 protein factors significantly related to age. Among them, the soluble form of Vascular Cell Adhesion Molecule 1 (sVCAM1) showed the strongest positive correlation with age. However, aged plasma depleted of sVCAM1 did not display obviously less detrimental effects on the young brain, indicating that sVCAM1 is not the driving factor of the aging phenotype. \citet{Yousef_2018} explained the high quantity of sVCAM1 as a result of high levels of membrane-bound Vascular Cell Adhesion Molecule 1 (VCAM1), owing to the constitutive shedding of VCAM1 from the BBB into plasma \citep{Garton_2003,SINGH_2005}.
This is in line with a higher expression level of VCAM1, as higher Vcam1 mRNA concentrations were detected in aged BECs compared to young ones. \citet{Yousef_2018} discovered that cultured BECs treated with aged plasma expressed considerably higher levels of VCAM1 than those treated with young plasma. Similar results can be observed in vivo when infusing aged plasma into young mice. VCAM1 is membrane bound on the luminal (blood-facing) side of the BBB and facilitates leukocyte tethering that leads to sustained inflammation of the brain \citep{Yousef_2018}. Meanwhile, they also discovered that VCAM1 is upregulated in response to inflammatory effects. Many studies show that aging and age-related disease are accompanied by a certain degree of inflammation \citep{Pizza_2011}. In the nervous system, neuroinflammation refers to the increase of activated microglia, while in the circulatory system it refers to vascular inflammatory changes. \citet{Yousef_2018} suggested that inflamed BECs rely on signaling factors transmitted through the BBB to induce a subsequent inflammatory response in the brain parenchyma, and that this signaling is induced by the interaction of VCAM1 and leukocytes. However, at this point in their research the signals were not identified. To determine the significance of VCAM1 in brain degeneration driven by aged plasma, \citet{Yousef_2018} deleted the Vcam1 gene from young mice and then infused aged or young plasma into these mice. Both treatments resulted in similar levels of neurogenesis and equally small quantities of activated microglia. Therefore, \citet{Yousef_2018} indicated that Vcam1 deletion is effective in eliminating the unfavorable effects on the hippocampus caused by aged plasma. \citet{Yousef_2018} hypothesized that Vcam1 deletion abrogates the negative effects of aged plasma by interfering with the VCAM1-leukocyte interaction. To test this hypothesis, they systemically administered a monoclonal VCAM1 antibody to young mice. With the VCAM1 antibody treatment, the neurogenesis inhibition and microglia activation triggered by aged plasma treatment were both prevented, while the increase in VCAM1 expression remained unaffected. More excitingly, treatment with the VCAM1 antibody also improved neurogenesis and reduced the number of active microglia in naturally aged mice. These results revealed that antibody blockade of VCAM1 can effectively mitigate the adverse impact of aged plasma and can also rejuvenate aged brains. Based on the above findings, \citet{Yousef_2018} proposed a model explaining the mechanism of how aged plasma influences brain cell behaviour through VCAM1 at the BBB and how the VCAM1 antibody inhibits these influences (Fig. 1). This research is further proof of previous studies on the inhibitory effects of aged blood on young brains. Inspiringly, \citet{Yousef_2018} proposed a model of the mechanism by which aged plasma triggers age-related brain damage through a specific protein at the blood-brain barrier. In traditional treatments of neurodegenerative diseases, the blood-brain barrier remains the biggest obstacle to therapeutic intervention in the brain. Importantly, \citet{Yousef_2018} revealed the possibility of ameliorating age-related neurogenesis decline and microglia activity via noninvasive modulation of proteins at the blood-brain barrier. Therefore, their research provides a possible alternative therapy to combat hippocampal neurodegeneration by molecular regulation through the circulatory system.
The most exciting finding of this paper is the discovery of a particular intervention, a VCAM1 antibody, that reverses age-related brain dysfunction in vivo, which holds promise for clinical application. From the perspective of medicinal development, this VCAM1 antibody molecule has already been approved in treatments for Multiple Sclerosis (MS) and Crohn's Disease \citep{Polman_2006}, supporting the feasibility and safety of this antibody. Together with multiple previous studies [1-4], this is one further step towards human rejuvenation and the 'fountain of youth'. However, many details of the proposed mechanism behind this process remain unclear. In their proposed model of VCAM1 modulation, the signals leading to neurodegeneration are considered to be induced by leukocyte-VCAM1 interactions through the VLA-4 integrin \citep{Yousef_2018}. However, systemic administration of a VLA-4 antibody to aged mice showed different results compared to the VCAM1 antibody treatment \citep{Yousef_2018}. The VLA-4 intervention only reduced active microglia and did not affect neurogenesis. This result may indicate that leukocytes bind to VCAM1 through multiple receptors or pathways. Alternatively, it might suggest that the signaling is not necessarily triggered by VCAM1 via leukocyte interaction, but possibly depends on other VCAM1 behaviour. Moreover, the inhibitory factors in aged plasma, as well as the signaling factors across the blood-brain barrier involved in this model, are not yet identified in this paper. As some factors in this mechanism remain unknown, the model proposed by \citet{Yousef_2018} still needs to be verified through further research. \citet{Yousef_2018} established the link between VCAM1 and the brain across the blood-brain barrier, which indicates VCAM1 as a potential target for the treatment of age-associated brain disease. This research also provides impressive evidence for medical strategies to alleviate, or even reverse, age-associated neurodegeneration. Although some questions are left open in this research, it is still of great importance to future academic studies and clinical applications.
Moving Away From Anecdotal Responses to Questions Faculty Have Concerning Broader Imp...
Michael Thompson

May 09, 2018
Abstract: Many faculty researchers, University administrators, proposal development individuals and organizations, engagement specialists, Societal Benefit Organizations (SBOs), and Societal Benefit Professionals (SBPs) have asked and indicated the need for a researched, evidence-based response to the following question: What is one of the most important keys to developing broader impacts for the National Science Foundation (NSF)? This question has become especially salient for faculty submitting proposals to NSF or other agencies, foundations, and organizations with different types of broader impacts foci. Faculty know it is vital to develop broader impacts, but they do not necessarily know what the deeper meaning of broader impacts is. To truly understand the scope of broader impacts, we need to go beyond anecdotal descriptions of what others have done. This article introduces a research-based framework for understanding, practicing, and starting to operate in a broader impacts paradigm. This response is a brief synopsis based on several works-in-progress that either have been or will be submitted for publication in peer-reviewed journals.

Brief Introduction, Background, and Methodology: Many do not realize that the concept, meaning, and methodology of broader impacts represent an international phenomenon. An investigation into this phenomenon revealed that at least eighty-two percent (82%) of the countries around the world utilize a range of names, terms, or phrases (NTPs) to describe broader impacts. Broader impacts-like NTPs were originally identified based on three overarching features. The first feature was that the NTP had to be focused on achieving something societally desirable. The second feature was that it had to encompass a process function. The third feature was that it needed to encourage achieving a specific goal [1]. Examples of these NTPs are found in Figure 1 and organized by country, except for the European Union's (EU's) Responsible Research and Innovation (RRI) and the Research Excellence Framework (REF).
A Working Definition of Online Citizen Science
Cathal Doyle, Markus Luczak-Roesch, and 8 more

May 07, 2018
Abstract: Citizen science, and online citizen science, are part of a movement towards open and participatory science, in which education is particularly interested due to potential benefits such as educating learners about the scientific process, as well as the topics of their study. This research is part of a project investigating the role of online citizen science in primary school science education. It provides an understanding of both citizen science and online citizen science from the literature, and then derives working definitions, which will guide our further investigations.

Introduction: There has been a movement in recent years towards open and participatory science, in an effort to make scientific research more accessible to all levels of society. Citizen science (CS) and online citizen science (OCS) aim to help with this movement \citep{Bonney_2009}, and the latter has become more and more popular in recent years \citep{Nov2011}. One area of society that has become particularly interested in OCS is that of education (especially science education) \citep{wynne2017}, where the potential benefits are to use OCS to educate learners about the scientific process, and about the particular topics of real scientific projects, through participation facilitated by digital technologies. However, research on how OCS relates to the formal setting of science education has received little attention so far. Furthermore, while much research speaks about OCS, no work so far seems to offer an unambiguous differentiation between CS and OCS. This article aims to close this gap, first providing an understanding of both CS and OCS from the literature, and then deriving working definitions from these understandings, which will guide our investigation of online citizen science in the science education of primary school children. The remainder of this article is structured as follows. We begin by providing the background of our research project, emphasising the link to education research and teaching practice. Afterwards we describe the methodology we followed for our initial literature review, from which we derived the working definitions of CS and OCS, which are presented afterwards. Finally, we give a brief outline of how this informs our ongoing research on novel ways to purposefully embed OCS for Year 3-8 students in New Zealand primary classrooms in ways that meet the aims and intentions of the Nature of Science strand of the New Zealand Curriculum.

Citizen Scientists in the Classroom: Investigating the Role of Online Citizen Science in Primary School Science Education. This research is part of a larger research project funded by the Teaching and Learning Research Initiative (TLRI). The TLRI is a fund initiated by the New Zealand government to "link education research and teaching practice" \cite{site}. The "Citizen Scientists in the Classroom" project explores the impact on student learning and engagement with science of incorporating OCS projects in New Zealand primary school classrooms (Year 3-8). It involves a co-constructive partnership (see Fig. 1) between researchers at Victoria University of Wellington and primary school teachers who have been identified as advocates of science education in New Zealand, and is the first attempt to investigate the potential of OCS projects to contribute to the improvement of the science education of primary-age children.
PREreview of bioRxiv article "NRG1-mediated recognition of HopQ1 reveals a link betwe...
Sophien Kamoun

June 28, 2019
This is a review of Brendolise et al., bioRxiv 293050; doi: https://doi.org/10.1101/293050, posted on April 1, 2018. This paper adds to a current body of research detailing the resistance mechanism triggered by the Pseudomonas syringae pv. tomato effector HopQ1 in the model plant Nicotiana benthamiana. This plant can be used as a source of novel disease resistance genes against plant pathogens.
Product Convolution
Benedict Irwin

March 13, 2026
If we have a random variable Z = XY, and we know the distributions for X and Y, then the distribution for Z is given by P_z(t) = \int_{-\infty}^{\infty} \frac{P_X(s)\,P_Y(t/s)}{|s|}\;ds. It then seems that if X and Y are uniformly distributed between −1 and 1, this results in the normalised distribution for Z being given as P_z(t) = -\frac{\log|t|}{2}. For two distributions between 0 and 1, it seems that P_z(t) = -\log(t), and between 0 and r, P_z(t) = -\frac{1}{r^2}\log\left(\frac{t}{r^2}\right). If we keep convolving the uniform distribution on the range −1 to 1, we get further distributions -\frac{\log|t|}{2},\; \frac{\log^2|t|}{4},\; -\frac{\log^3|t|}{12},\; \frac{\log^4|t|}{48},\;\ldots and we can conclude that the product convolution of n uniformly distributed variables in this range has distribution p_z(t) = \frac{(-1)^{n+1}\log^{n-1}|t|}{2\,(n-1)!}. Now we might consider the determinant of a matrix A whose elements are in the range −1 to 1. Being of the form \det(A) = ad - bc, we see that the random variable is given by d = AD - BC. The distributions of AD and BC are the same, given by -\frac{\log|t|}{2}, so the total variable will be a convolution of these two: P_d(t) = \int_{-1}^{1} P_{AD}(x)\,P_{BC}(x-t)\;dx. This gives a piecewise distribution supported on -2 \le t \le 2, expressible in terms of logarithms and dilogarithms \mathrm{Li}_2, with for example P_d(0) = 1 and P_d(\pm 1) = \frac{1}{4}\left(2-\frac{\pi^2}{6}\right). Integrations over this then suggest closed forms for the probability that the determinant of a random 2 by 2 matrix exceeds a given bound, with denominator 48 for P_{D>1}, along with P_{D>1/2} = \frac{1}{192}\left(9 + 2\pi^2 - 6\log(2) + 6\log^2(2)\right). For matrices whose elements are either −1 or 1, the probability distribution of a product of two elements is the same as that of one element, P_x(t) = \frac{\delta(t-1)+\delta(t+1)}{2}. For d = AD - BC, this is just the convolution of two of these: P_d(t) = \int_{-\infty}^{\infty} \frac{\delta(x-1)+\delta(x+1)}{2}\cdot\frac{\delta(x-t-1)+\delta(x-t+1)}{2}\;dx = \frac{1}{4}\left(\delta(t-2) + 2\delta(t) + \delta(t+2)\right). A nice concept is a generating function of distributions for the convolution of n distributions. If we write f(q,X) = (X *_M X)q^2 + (X *_M X *_M X)q^3 + \cdots we can differentiate n times with respect to q and set q to zero; this gives us the distribution function for n convolved distributions. For example \sum_{k=0}^{\infty} \frac{(-\log|t|)^k}{2\,k!}\,q^{k+1} = \frac{q}{2|t|^{q}} gives the product-convolution generating function for variables drawn from a uniform distribution on −1 to 1. Then for example \frac{1}{2!}\,\frac{d^2}{dq^2}\,\frac{q}{2|t|^{q}}\,\Bigg|_{q \to 0} = -\frac{\log|t|}{2}, which is the product-convolution of 2 distributions.
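A quick Monte Carlo check of the two-variable case is easy to run; this sketch is mine, not part of the original note, and simply compares an empirical histogram of products of two U(−1, 1) samples against the claimed density −log|t|/2:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    z = rng.uniform(-1, 1, n) * rng.uniform(-1, 1, n)

    hist, edges = np.histogram(z, bins=50, range=(-1, 1), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    claimed = -np.log(np.abs(centers)) / 2   # claimed product-convolution density
    print(np.max(np.abs(hist - claimed)))    # small, except near the t = 0 spike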
Zeta(3) Glasser Master Polynomial
Benedict Irwin

March 13, 2026
We have that \zeta(3) = \frac{1}{2}\int_0^\infty \frac{x^2}{\exp(x)-1}\;dx, but through the Glasser master theorem we can also construct \zeta(3) = \frac{1}{2}\int_0^\infty \frac{x^2}{\exp(|x|)-1}\;dx, \quad \zeta(3) = \frac{1}{2}\int_0^\infty \frac{(-1+x^2)^2/x^2}{\exp\left(\left|(-1+x^2)/x\right|\right)-1}\;dx, \quad \zeta(3) = \frac{1}{2}\int_0^\infty \frac{P_n^2/P_{n-1}^2}{\exp\left(\left|P_n/P_{n-1}\right|\right)-1}\;dx, and so on, where we have P_0(x) = x, P_1(x) = -1+x^2, P_2(x) = -2x+x^3, P_3(x) = 1-3x^2+x^4, P_4(x) = 3x-4x^3+x^5, P_5(x) = -1+6x^2-5x^4+x^6, P_6(x) = -4x+10x^3-6x^5+x^7. The coefficients coincide with those of A102426. These can be defined via K_n(x) = \begin{cases} 0, & n=0 \\ 1, & n=1 \\ x\,K_{n-1}(x)-K_{n-2}(x), & n \ge 2 \end{cases} with P_n(x) = K_{n+2}(x). We can generate the polynomial directly as K_n(x) = \sum_{k=0}^{\lfloor (n-1)/2 \rfloor} (-1)^{1+k+\lceil n/2 \rceil}\binom{\lfloor n/2 \rfloor + k}{\lfloor (n-1)/2 \rfloor - k}\, x^{2k + [n \bmod 2 = 0]}.
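As a numerical sanity check (my own sketch, using the reading that the transformed integrand's argument is P_n/P_{n-1} = K_{n+2}/K_{n+1}), one can evaluate the first few transformed integrals and compare them with ζ(3) ≈ 1.2020569:

    import numpy as np
    from scipy.integrate import quad

    def K(n, x):
        # Recurrence K_0 = 0, K_1 = 1, K_n = x*K_{n-1} - K_{n-2}.
        a, b = 0.0, 1.0
        for _ in range(n):
            a, b = b, x * b - a
        return a

    def integrand(x, n):
        r = K(n + 2, x) / K(n + 1, x)   # P_n / P_{n-1}
        t = abs(r)
        if t == 0.0:
            return 0.0
        if t > 50.0:                    # avoid overflow; tail behaves like t^2 e^{-t}
            return 0.5 * np.exp(2.0 * np.log(t) - t)
        return 0.5 * r * r / np.expm1(t)

    for n in (0, 1):
        val, err = quad(integrand, 0, np.inf, args=(n,), limit=300)
        print(n, val)   # both should be close to zeta(3) = 1.2020569...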
PREreview from the Computational Biology & Gene Regulation group at NCMM
Anthony Mathelier

April 30, 2018
This is a preprint review from our group's journal club. We reviewed the following manuscript: CREAM: Clustering of genomic REgions Analysis Method
Beta Transform
Benedict Irwin

March 13, 2026
ABSTRACT I investigate the idea of a "beta transform", based on the beta function B(a, b), which is somewhat analogous to the Mellin transform and the Ramanujan master theorem. The main result so far is the form of an expansion.

THE TRANSFORM Consider the integral transform \mathcal{B}[f](a,b) = \int_0^1 t^{a-1}(1-t)^{b-1} f(t)\;dt. We then have some examples: \mathcal{B}[1](a,b) = B(a,b), \mathcal{B}[x^k](a,b) = B(a+k,b), \mathcal{B}[(1-x)^k](a,b) = B(a,b+k), \mathcal{B}[e^x](a,b) = B(a,b)\,{}_1F_1(a;a+b;1), \mathcal{B}[e^{-x}](a,b) = B(a,b)\,{}_1F_1(a;a+b;-1), \mathcal{B}[\log(1-x)](a,b) = B(a,b)\left(\psi_0(b)-\psi_0(a+b)\right), \mathcal{B}[\log(1+x)](a,b) = B(a+1,b)\,{}_3F_2(1,1,1+a;2,1+a+b;-1), \mathcal{B}\left[\frac{1}{1-x}\right] = B(a,b-1), \mathcal{B}\left[\frac{1}{1+x}\right] = B(a,b)\,{}_2F_1(1,a;a+b;-1), \mathcal{B}\left[\frac{1}{2-x}\right] = B(a,b)\,{}_2F_1(1,b;a+b;-1), \mathcal{B}\left[\frac{1}{x}\right] = B(a-1,b). We can explore the analogy to Ramanujan's master theorem, which states \mathcal{M}[f](s) = \int_0^\infty x^{s-1}f(x)\;dx = \Gamma(s)\,\phi(-s) such that f(x) = \sum_{k=0}^{\infty} \phi(k)\frac{(-x)^k}{k!}. Is there an analogous statement about \mathcal{B}[f](a,b) = \int_0^1 t^{a-1}(1-t)^{b-1} f(t)\;dt = B(a,b)\,\Upsilon(a,b)?

INVERSE TRANSFORM If a function can be expressed as f(a,b) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\,B(a+k,b+l), then the inverse transform might look something like \mathcal{B}^{-1}[f(a,b)] = \int_S K(a,b,x)\,f(a,b)\;da\,db, but we know that \mathcal{B}^{-1}[B(a+k,b+l)] = x^k(1-x)^l, so formally speaking \mathcal{B}^{-1}[f(a,b)] = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} \mathcal{B}^{-1}[c_{kl}B(a+k,b+l)] = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\,x^k(1-x)^l.

DEVELOPMENT In light of this, we can attempt to find a series for the Υ featured in \mathcal{B}[f](a,b) = B(a,b)\,\Upsilon(a,b). To make this a series of beta functions, we can write the expansion in the form \Upsilon(a,b) = c_{00} + c_{10}\frac{a}{a+b} + c_{01}\frac{b}{a+b} + c_{11}\frac{ab}{(a+b)(a+b+1)} + c_{12}\frac{a\,b(b+1)}{(a+b)(1+a+b)(2+a+b)} + c_{21}\frac{a(a+1)\,b}{(a+b)(1+a+b)(2+a+b)} + \cdots. If this is the case, we then have B(a,b)\,\Upsilon(a,b) = c_{00}B(a,b) + c_{10}B(a+1,b) + c_{01}B(a,b+1) + c_{11}B(a+1,b+1) + c_{12}B(a+1,b+2) + c_{21}B(a+2,b+1) + \cdots, which gives the inverse transform \mathcal{B}^{-1}[B(a,b)\Upsilon(a,b)] = c_{00} + c_{10}x + c_{01}(1-x) + c_{11}x(1-x) + c_{12}x(1-x)^2 + c_{21}x^2(1-x) + \cdots.

THE EXPANSION The expansion for Υ appears to be a hypergeometric-style expansion. We can rewrite the general form \Upsilon(a,b) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\frac{(a)_k (b)_l}{(a+b)_{k+l}}, where (a)_k denotes a Pochhammer symbol. From this we have a full description of the inverse process: \mathcal{B}[f](a,b) = \int_0^1 t^{a-1}(1-t)^{b-1} f(t)\;dt = B(a,b)\sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\frac{(a)_k (b)_l}{(a+b)_{k+l}}, then f(x) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\,x^k(1-x)^l = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\sum_{m=0}^{l} c_{kl}\binom{l}{m}(-1)^m x^{m+k}. The inverse integral transform must then satisfy \int_S K(a,b,x)\,B(a+k,b+l)\;da\;db = x^k(1-x)^l.

THE INCOMPLETE BETA TRANSFORM We could also consider the transform \mathcal{B}_z[f] = \int_0^z t^{a-1}(1-t)^{b-1} f(t)\;dt = B_z(a,b)\,\Upsilon(a,b,z), where obviously \mathcal{B}_z[t^k] = \int_0^z t^{a-1}(1-t)^{b-1}\,t^k\;dt = B_z(a+k,b) and \mathcal{B}_z[(1-t)^k] = \int_0^z t^{a-1}(1-t)^{b-1}(1-t)^k\;dt = B_z(a,b+k).

RELATIONSHIP TO MELLIN TRANSFORM Starting from \mathcal{B}[f](a,b) = \int_0^1 t^{a-1}(1-t)^{b-1} f(t)\;dt, if we substitute t = x/(1+x), then dt = dx/(1+x)^2; when t = 0, x = 0, and when t = 1, x = ∞: \mathcal{B}[f](a,b) = \int_0^\infty \frac{x^{a-1}}{(1+x)^{a-1}}\left(1-\frac{x}{1+x}\right)^{b-1} f\!\left(\frac{x}{1+x}\right)\frac{dx}{(1+x)^2} = \int_0^\infty \frac{x^{a-1}}{(1+x)^{a+b}}\,f\!\left(\frac{x}{1+x}\right)dx. For example, with f(t) = t^k the right-hand side is \int_0^\infty \frac{x^{a+k-1}}{(1+x)^{k+a+b}}\;dx = B(a+k,b), and with f(t) = (1-t)^k it is \int_0^\infty \frac{x^{a-1}}{(1+x)^{a+b+k}}\;dx = B(a,b+k).

APERY'S CONSTANT We can write \zeta(3) = \frac{1}{2}\int_0^1 \frac{x^2}{(1-x)^4}\,\frac{1}{e^{x/(1-x)}-1}\;dx = \frac{1}{2}\,B(3,5)\,\Upsilon(3,5); this is half the beta transform, with a = 3 and b = 5, of the function f(t) = \frac{(1-t)^{-8}}{e^{t/(1-t)}-1}.
Then we would like to find a relationship in the coefficients of the function \frac{(1-x)^{-8}}{e^{x/(1-x)}-1} = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} c_{kl}\,x^k(1-x)^l = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\sum_{m=0}^{l} c_{kl}\binom{l}{m}(-1)^m x^{m+k}, such that we may transfer this to a sum of beta functions through the transform. Equally we can write \zeta(3) = \sum_{k=1}^{\infty}\frac{B(k,1)}{k^2}. This would suggest the integral \int_0^1 x^{-1}\sum_{k=1}^{\infty}\frac{x^k}{k^2}\;dx = \int_0^1 \frac{\mathrm{Li}_2(x)}{x}\;dx = \zeta(3), which is true. We could also write \zeta(3) = \sum_{k=1}^{\infty}\frac{(k+1)(k+2)}{k^3}\,B(1+k,2); this implies \int_0^1 (1-x)\left[2\,\mathrm{Li}_3(x) + 3\,\mathrm{Li}_2(x) - \log(1-x)\right]dx = \zeta(3).
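As a small numerical check of the tabulated example for e^x (my own sketch; the parameter values a = 2.5, b = 3.5 are arbitrary), the transform computed by quadrature should match B(a,b)·₁F₁(a; a+b; 1):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import beta, hyp1f1

    a, b = 2.5, 3.5
    lhs, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1) * np.exp(t), 0, 1)
    rhs = beta(a, b) * hyp1f1(a, a + b, 1.0)
    print(lhs, rhs)   # the two values should agree to quadrature precision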
Anatomy of a Social Computer
Markus Luczak-Roesch and 2 more

April 21, 2018
Abstract: Can we develop a generic socio-technical computing device that lets emergent human collectives determine the computational program by their real-time inputs? In this conceptual article we present the system architecture of a novel approach to facilitate socio-technical computation beyond what the state of the art in human-agent collectives and social machines has considered so far. The system responds to bursts of activity around a topic by spinning up tasks made up from the observed content and by collecting instructions on those tasks from the general public on the Web. It is designed in an open fashion (open standards, open access) to embrace the social in computing as in the theories of Max Weber, which opens a variety of new challenges and future research directions as laid out in this article. Finally, we introduce a new metaphor for what we regard as one of the current grand challenges for the Computer Science discipline resulting from this kind of work.

Introduction: A growing number of voices in the research community state that "what goes viral" is heavily impacted by the commercial use of social media, and that the modeling of virality based on retrospective observation of successful campaigns does not bring us any nearer to predicting what will go viral in the future \citep{Cebrian_2016}. One can observe that the affordances of today's social media systems, and research in this context, are largely focused on the sharing of information on one or at most a set of proprietarily linked platforms. Tools or environments to empower human collectives to actively shape their action and to form procedural knowledge around an actual event or topic in real time are largely missing to date, apart from very few examples like IFTTT (https://ifttt.com), which are mainly focused on linking the personal information environment of the individual and lack the openness to be suited for emergent collective action. Consequently, the ultimate vision of autonomously operating human-agent collectives \citep{Jennings_2014} or emergent social machines \citep*{Hendler_2010}, incorporating the general public on the Web, cannot be seen in practice yet. We argue that this is largely because research and development in the space of social media is currently in a retrospective trap. Our views of the interplay of the technical and the social on the Web remain highly descriptive, and the constructive dimension is limited. To fill this gap, we demonstrated an early-stage prototype of a system that adds the formation of procedural knowledge to any social media system it is connected with \citep{Luczak_Roesch_2016}. In this article, we give a detailed account of the principled architecture of the next iteration of this system, which reacts upon activity bursts and lets human participants perform low-level actions on content that they regard as meaningful and purposeful in the context of a real-world event. The human input that is captured by this Social Computer forms the formal program running on it, while the technical backend simply facilitates that information can flow across platform boundaries to reach further human participants. Being fully based on the principles of the Web architecture, the system allows open access to the procedural knowledge that is created by the input from the crowd that engages with the system via one of its many instruction interfaces. This also enables the development of custom views of the computer's state and tailored instruction interfaces.
In the remainder of this article we will give an overview of the current state of the literature on emergent socio-technical systems on the Web, such as Social Machines as well as human-agent collectives. Then we present our principled architecture of a Social Computer and describe use cases to illustrate how an instantiation of it works. In the end we discuss a number of research challenges arising from the rigorous openness of this novel computing system to human input. We also introduce a metaphor for what we regard as one of the grand challenges for Computer Science in the socio-technical age.

Emergent Socio-technical Systems on the Web. The Theory and Practice of Social Machines. While first mentioned around 2000 by \citet*{m2000}, a more formal account of Social Machines was not given until recently, when various researchers began to investigate the entire spectrum of what had been abstractly promised as a novel computing paradigm. The work on this project can be roughly divided into three main work areas: 1) observing socio-technical systems to understand the interplay as well as the micro and macro effects of human and machine coexistence; 2) devising novel technologies for decentralised social Web applications; 3) mapping out social, moral and ethical issues as well as principles of the World Wide Web today and in the prospected future. The work presented in this article is heavily related to the observational work on Social Machines, which so far has been either large-scale and quantitative or very small-scale qualitative work, and can be seen as the foundation for the few theories about Social Machines that have been established to date. The first theory that came out of this puts individual systems such as Twitter, Facebook, Reddit, Zooniverse or Mechanical Turk at the centre of the consideration \citep*{Smart_2014}. By classifying the socio-technical properties of those systems (e.g. incentive mechanisms, information sharing capabilities or general system goals), the approach seeks to enable system developers to imitate and adapt particular patterns of those systems in order to build new Web-based participatory systems most successfully. An alternative to this system-oriented viewpoint is the work on narrative structures about purposeful collective processes that can range across the boundaries of individual systems \citep{Tarte_2015,Murray_Rust_2015}. Focusing on communities and the evolution of sociality within those communities, this work has leveraged archetypes as the fundamental theory of Social Machines. These two qualitative and small-scale approaches are complemented by the information-centric view of Social Machines \citep{Luczak_Roesch_2015,Luczak_Roesch_2015a,Luczak_Roesch_2018}. In contrast to the classification work, but in line with the ideas of archetypal narratives, this approach assumes Social Machines to be the emergent output of human activity rather than of any engineered input. As retrospective approaches, these three individual lines complement each other well to allow for a multiperspective classification of socio-technical processes on the Web. The quantitative approach can be used to sample relevant subsequences of user interactions, which can then be investigated qualitatively to give a detailed account of narrative structures. Our approach presented in this article embeds the information-centric approach to Social Machines to facilitate system-agnostic detection of activity bursts as well as content filtering.
However, we fundamentally change the focus of our theoretical consideration of Social Machines from the retrospective viewpoint to the constructive anticipation, planning and execution of purposeful collective action.

Agents, Interactions and Social Protocols. Human-agent collectives (HACs) come from a multi-agent systems (MAS) angle to tackle the challenge of coordinating collective action in an open and decentralised environment. This angle heavily emphasises the role of economic principles in autonomous systems as well as dedicated interaction protocols that govern "how the agents' actions translate into an outcome, the range of actions available to the participants, and whether the interactions occur over steps or are one-shot" \citep{Dash_2003}. Most recent work in this area widens the economic view of incentivisation slightly, to account for the diversity of motivations of different people in different situations \citep{Jennings_2014}. However, HACs still focus on solving fixed tasks with dedicated goals that are managed in dedicated applications (e.g. citizen science platforms or digital disaster response services). With our work, instead, we seek to let even the task design and goal setting arise from human activity only, and to allow for composing multiple systems to contribute to the problem solving, an approach that has also been taken by other work that comes from a similar angle but still relies on predefined interaction models, social protocols or executable specifications \citep{Ahmad_2013,Giunchiglia2010,f2013,Murray_Rust_2014,Chopra_2016}. We seek to further expand this idea of task emergence and reduce even the interaction protocols down to a set of most fundamental atomic instructions that allow the formation of arbitrary process flows involving the interfacing systems and the reached human participants. This upgrades the role of flexible low-level interaction as opposed to fixed sets of algorithmic rules, a principle that has already been the foundation of our modern interactive computing \citep*{Wegner_1997} but now seems to get lost when we build on agents with fixed interaction protocols at a high level of abstraction. With the Social Computer, human participants shall ultimately get the facility to formulate their own interaction protocols composed of sequences of atomic instructions \citep{Luczak_Roesch_2015b}.

Collective Intelligence, Human Computation and Crowdsourcing. Our approach differs from the typically coordinated approach in collective intelligence, human computation and crowdsourcing \citep{Malone_2009,Woolley_2010,Quinn_2011,Kittur_2013}. Research in these areas commonly calls for methods to engineer the way a human collective is going to perform a pre-defined task \citep{Minder_2012,Minder_2012a} and relies on dedicated crowdsourcing platforms \citep{r2015}. We, instead, want to expose the intelligence that lies in the accumulated activities of human users on the Web, while minimising the presuppositions about the tasks to be performed as well as the communities or systems in which they take part. Such "loosely knit coordinated actions" \citep{Lee_2015} have recently been highlighted as an area of increasing importance for research on computer-supported collaborative work (CSCW), as has the role of activity sequences \citep{Keegan_2016}.
We contribute to this line of research by introducing a system to capture and support emergent coordinated action that is also freely available for adaptation and further development by other researchers.Principled Architecture of a Social ComputerWe are now going to present the generic system architecture underlying the A1, our first prototype of a Social Computer as depicted in Figure 1.
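To make the burst-driven behaviour described above concrete, a minimal illustrative sketch of a sliding-window burst detector that spins up a task from observed content might look as follows; all class names, window sizes and thresholds here are hypothetical assumptions, not taken from the A1 prototype:

    from collections import deque

    class BurstDetector:
        # Illustrative only: the window and threshold values are arbitrary.
        def __init__(self, window_s=60.0, threshold=10):
            self.window_s = window_s
            self.threshold = threshold
            self.events = deque()

        def observe(self, timestamp, content):
            # Keep only events inside the sliding time window.
            self.events.append((timestamp, content))
            while self.events and self.events[0][0] < timestamp - self.window_s:
                self.events.popleft()
            # A burst of activity spins up a task built from the observed content.
            if len(self.events) >= self.threshold:
                return {"type": "task", "items": [c for _, c in self.events]}
            return None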
Convolution of Uniform Distributions
Benedict Irwin

March 13, 2026
If we take the sum of two variables from uniform distributions, the distribution of the sum is not uniform at all. This is because the new distribution is the (normalised) convolution of the old ones. If we take the uniform distribution from 0 to 1, this can be expressed using the Heaviside step function as \mathcal{U}_0^1(x) = \Theta(x)-\Theta(x-1). The convolution of two of these is \mathcal{U}_0^1(x) * \mathcal{U}_0^1(x) = (x-2)\Theta(x-2) - 2(x-1)\Theta(x-1) + x\Theta(x). We can keep convolving the original distribution with the old ones, and the results look more and more Gaussian-like. A neater way to express these distributions is through their Laplace transform. If *^n[\mathcal{U}_0^1](x) is the convolution of n distributions, then we have \mathcal{L}[*^1[\mathcal{U}_0^1](x)] = \frac{(1-e^{-s})^1}{s^1}, \mathcal{L}[*^2[\mathcal{U}_0^1](x)] = \frac{(1-e^{-s})^2}{s^2}, \mathcal{L}[*^3[\mathcal{U}_0^1](x)] = \frac{(1-e^{-s})^3}{s^3}, \mathcal{L}[*^n[\mathcal{U}_0^1](x)] = \frac{(1-e^{-s})^n}{s^n}. The terms in the series expansion for \frac{(1-e^{-s})^n}{s^n} are interesting. The m-th term is given by an expression (1)/Q(1), (-n)/Q(2), (-n^2-n^3)/Q(3), (-2n + 5n^2 + 30n^3 + 15n^4)/Q(4), (2n^2 - 5n^3 - 10n^4 - 3n^5)/Q(5), (16n - 42n^2 - 91n^3 + 315n^4 + 315n^5 + 63n^6)/Q(6), (-16n^2 + 42n^3 + 7n^4 - 105n^5 - 63n^6 - 9n^7)/Q(7), (-144n + 404n^2 + 540n^3 - 2345n^4 + 840n^5 + 3150n^6 + 1260n^7 + 135n^8)/Q(8), (144n^2 - 404n^3 + 100n^4 + 665n^5 - 448n^6 - 630n^7 - 180n^8 - 15n^9)/Q(9), (768n - 2288n^2 - 2068n^3 + 11792n^4 - 8195n^5 - 8085n^6 + 8778n^7 + 6930n^8 + 1485n^9 + 99n^{10})/Q(10), where the Q(n) appear to be OEIS A053657. One question we could ask is: what do the fractional distributions look like? If we use Post's inversion formula to get an approximation, we get a seemingly well-behaved distribution starting from 0 with a long right tail for n = 3/2.
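The two-fold convolution is easy to verify numerically; this discretised check is my own sketch, not from the note, and compares a numerical self-convolution of the U[0, 1] density against the triangular density implied by the step-function expression above:

    import numpy as np

    dx = 0.001
    x = np.arange(0, 1, dx)
    u = np.ones_like(x)                    # density of U[0, 1]
    conv2 = np.convolve(u, u) * dx         # density of the sum of two uniforms
    grid = np.arange(conv2.size) * dx
    triangle = np.where(grid < 1, grid, np.clip(2 - grid, 0, None))
    print(np.max(np.abs(conv2 - triangle)))  # ~dx, i.e. discretisation error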
Impact of Artificial Light at Night on Bird Migration
Emily Hansen, Cheng Ma, and 3 more

May 10, 2021
ABSTRACT Millions of birds are killed annually as a result of collisions with buildings or exhaustion from being disoriented and trapped by intense artificial light (Crawford and Engstrom 2001). The problem is especially pronounced in urban areas, during migration season, and during times when anomalously large amounts of man-made light are emitted at night. Previous research has shown that there is an association between light and bird flight paths at low spatio-temporal resolution (La Sorte et al., 2017) as well as at a very granular spatial resolution during specific temporal events (Van Doren et al., 2017). However, there is a notable lack of research addressing neighborhood-scale flight and death patterns in urban areas. Here we develop statistical and spatial analyses of the relationship between reflectivity as a proxy for migratory birds and photogrammetrically mapped light intensity levels at a high spatio-temporal resolution in Manhattan. From there, we aim to correlate bird death counts at specific buildings to these increased light levels. The findings of this project demonstrate no conclusive positive or negative correlation between reflectivity and building brightness, but do suggest variation at a local scale and clear temporal patterns in aggregate.
Detection of polluting plumes ejected from NYC buildings
Ben Steers and 5 more

April 15, 2019
As part of the urban metabolism, city buildings consume resources and use energy, producing environmental impacts on the surrounding air by emitting plumes of pollution. Plumes that have been observed in Manhattan range from water vapor emitted from heating and cooling systems’ steam vents to CO2 and dangerous chemical compounds (e.g. ammonia, methane). City agencies are interested in detecting and tracking these plumes as they provide evidence for signs of urban activity, cultivation of living and working spaces and can support the provision of services whilst monitoring environmental impacts. The Urban Observatory at New York University’s Center for Urban Science and Progress (CUSP-UO) continuously images the Manhattan skyline at 0.1 Hz, and day-time images can be used to detect and characterize plumes from buildings in the scene. This project built and trained a deep convolutional neural network for detection and tracking of these plumes in near real-time. The project created a large training set of over 1,100 actual plumes as well as sources of contamination such as clouds, shadows and lights, and applied the relevant network architecture for training of the model. The trained convolutional neural network was applied to the archival Urban Observatory data between two time periods: 26th October-31st December 2013 and 1st January-13th March 2015 to generate detections of building plume activity during those time periods. Buildings with high plume ejection rates were identified, and all plumes could be classified by their color (i.e. carbon vs water vapor). The final result was a detection of plumes emitted during the time periods that the dataset spans.
Automated Detection of Street-Level Tobacco Advertising Displays
Federica Bianco (CUSP capstone manager), Charlie Moffett, and 7 more

May 10, 2021
\cite{products} 
Mellin Transforms of Products of Trigonometric Functions
Benedict Irwin

March 13, 2026
I am interested in integrals of the form \int_0^\infty x^{s-1}\sin(a x)\sin(b x)\sin(c x)\cdots\;dx. Is there a general rule? Some examples with 4 terms:
\int_0^\infty x^{s-1}\sin(x)\sin(2x)\sin(3x)\sin(4x)\;dx = -\tfrac{1}{8}\,120^{-s}\left(-12^s+15^s+20^s\right)\cos\left(\tfrac{\pi s}{2}\right)\Gamma(s)
\int_0^\infty x^{s-1}\sin(2x)\sin(3x)\sin(4x)\sin(5x)\;dx = -\tfrac{1}{8}\,840^{-s}\left(-60^s+84^s+105^s+140^s-420^s\right)\cos\left(\tfrac{\pi s}{2}\right)\Gamma(s)
\int_0^\infty x^{s-1}\sin(3x)\sin(4x)\sin(5x)\sin(6x)\;dx = -\tfrac{1}{8}\,360^{-s}\left(-20^s+30^s+36^s+45^s+60^s-90^s-180^s\right)\cos\left(\tfrac{\pi s}{2}\right)\Gamma(s)
These make it look like there is a pattern, and it may be to do with the factors of the constants in the arguments of the sine functions. Let's define
\mathcal{C}(s)=\cos\left(\tfrac{\pi s}{2}\right)\Gamma(s), \qquad \mathcal{S}(s)=\sin\left(\tfrac{\pi s}{2}\right)\Gamma(s)
and define the Mellin-sine product
\mathcal{M}(a,b,c,d,\cdots)=\int_0^\infty x^{s-1}\sin(ax)\sin(bx)\sin(cx)\sin(dx)\cdots\;dx.
Then we get (with \mathcal{S}(s) for an odd number of sines and \mathcal{C}(s) for an even number):
2^0\,\mathcal{M}(1)/\mathcal{S}(s) = 1
2^1\,\mathcal{M}(1,2)/\mathcal{C}(s) = 1 - 3^{-s}
2^2\,\mathcal{M}(1,2,3)/\mathcal{S}(s) = 2^{-s} + 4^{-s} - 6^{-s}
2^3\,\mathcal{M}(1,2,3,4)/\mathcal{C}(s) = -6^{-s}-8^{-s}+10^{-s}
2^4\,\mathcal{M}(1,2,3,4,5)/\mathcal{S}(s) = 1+3^{-s}+5^{-s}-11^{-s}-13^{-s}+15^{-s}
2^5\,\mathcal{M}(1,2,3,4,5,6)/\mathcal{C}(s) = 1+3^{-s}-2\cdot 7^{-s}-11^{-s}+17^{-s}+19^{-s}-21^{-s}
2^6\,\mathcal{M}(1,2,3,4,5,6,7)/\mathcal{S}(s) = 2\cdot 4^{-s} + 6^{-s} + 8^{-s} - 12^{-s} -14^{-s} - 18^{-s} + 24^{-s} + 26^{-s} - 28^{-s}
There is something interesting going on. If we do this for primes, to make sense of any factorisation, we get
2^0\,\mathcal{M}(2)/\mathcal{S}(s) = 2^{-s}
2^1\,\mathcal{M}(2,3)/\mathcal{C}(s) = 1-5^{-s}
2^2\,\mathcal{M}(2,3,5)/\mathcal{S}(s) = 4^{-s}+6^{-s}-10^{-s}
2^3\,\mathcal{M}(2,3,5,7)/\mathcal{C}(s) = 1-11^{-s}-13^{-s}+17^{-s}
2^4\,\mathcal{M}(2,3,5,7,11)/\mathcal{S}(s) = 2^{-s}-6^{-s}+10^{-s}+12^{-s}-22^{-s}-24^{-s}+28^{-s}
2^5\,\mathcal{M}(2,3,5,7,11,13)/\mathcal{C}(s) = 1+3^{-s}-7^{-s}-9^{-s}+19^{-s}-23^{-s}-25^{-s}+35^{-s}+37^{-s}-41^{-s}
We can also try
2^0\,\mathcal{M}(1)/\mathcal{S}(s) = 1
2^1\,\mathcal{M}(1,2)/\mathcal{C}(s) = 1-3^{-s}
2^2\,\mathcal{M}(1,2,3)/\mathcal{S}(s) = 2^{-s} + 4^{-s} - 6^{-s}
2^3\,\mathcal{M}(1,2,3,5)/\mathcal{C}(s) = 3^{-s} - 7^{-s}-9^{-s}+11^{-s}
2^4\,\mathcal{M}(1,2,3,5,7)/\mathcal{S}(s) = 2^{-s} + 10^{-s} - 14^{-s} -16^{-s} + 18^{-s}
2^5\,\mathcal{M}(1,2,3,5,7,11)/\mathcal{C}(s) = 1 - 3^{-s} - 5^{-s} + 7^{-s} + 9^{-s} -13^{-s} - 21^{-s} + 25^{-s} + 27^{-s} - 29^{-s}
This appears to relate to the expansion of products of sines: if we convert the sine terms to complex exponentials and expand, then writing \mathcal{E}(x)=e^{i x} we have
2^1 \sin(a x) = i\left(\mathcal{E}(-ax)-\mathcal{E}(ax)\right)
2^2 \sin(a x)\sin(b x) = -\mathcal{E}(-ax-bx)+\mathcal{E}(ax-bx)+\mathcal{E}(-ax+bx)-\mathcal{E}(ax+bx)
and in general it seems
2^n i^n \prod_{k=1}^n \sin(a_k x) = \sum_{k_1=0}^1 \sum_{k_2=0}^1 \cdots \sum_{k_n=0}^1 (-1)^{k_1+k_2+\cdots+k_n}\, e^{i\left((-1)^{k_1}a_1 + (-1)^{k_2}a_2 + \cdots + (-1)^{k_n}a_n\right)x}
\prod_{k=1}^n \sin(a_k x) = \frac{1}{(2i)^n}\sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 (-1)^{\sum_{j=1}^n k_j}\, e^{i x \sum_{l=1}^n (-1)^{k_l} a_l}.
We can use the Mellin transform of an exponential, \int_0^\infty x^{s-1} e^{i a x}\;dx = (-i a)^{-s}\,\Gamma(s), and write the Mellin transform of the product of sines as
\int_0^\infty x^{s-1}\prod_{k=1}^n \sin(a_k x)\;dx = \frac{\Gamma(s)}{(2i)^n}\sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 (-1)^{\sum_j k_j}\left(-i\sum_{l=1}^n(-1)^{k_l}a_l\right)^{-s}
\frac{(2i)^n}{\Gamma(s)}\int_0^\infty x^{s-1}\prod_{k=1}^n \sin(a_k x)\;dx = \sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 (-1)^{\sum_j k_j}\left(-i\sum_{l=1}^n(-1)^{k_l}a_l\right)^{-s}.
This now makes sense of results like 2^5\,\mathcal{M}(1,2,3,5,7,11)/\mathcal{C}(s) = 1 - 3^{-s} - 5^{-s} + 7^{-s} + 9^{-s} -13^{-s} - 21^{-s} + 25^{-s} + 27^{-s} - 29^{-s}, because 1 + 2 + 3 + 5 + 7 + 11 = 29. And the other numbers must be generated by summations of the inputs.
If we wanted to represent something such as the Riemann zeta function in this way (or some truncation of it), \zeta(s) = 1^{-s} + 2^{-s} + 3^{-s} + 4^{-s} + 5^{-s} + \cdots, we would then need to find the set of integers such that all the combinations of the signed sums of the numbers cancel to precisely that series. Consider the sets of n numbers that then form the partial series for the Riemann zeta function, starting from [1], …. We may need combinations of integrals, or to use cosine functions instead of sine functions. We can also insert fractions to some extent, [1/2, 1, 2] → [−1, 3, 5, −7], …. It seems that \frac{1}{\Gamma(s)}\int_0^\infty x^{s-1}\prod_{k=1}^n e^{-a_k x}\;dx = \left(\sum_{k=1}^n a_k\right)^{-s}. This gives the famous relationship \frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{e^x-1}\;dx = \zeta(s), but there is more than one way to make the sum. We had \frac{1}{e^x-1} = e^{-x} + e^{-2x} + e^{-3x} + \cdots, and the integral over this made the sum 1^{-s} + 2^{-s} + 3^{-s} + \cdots. We can consider \frac{e^x}{(e^x-1)^2} = e^{-x} + 2e^{-2x} + 3e^{-3x} + 4e^{-4x} + \cdots, which gives \frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}e^x}{(e^x-1)^2}\;dx = \zeta(s-1), and in general we get \frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}\sum_{l} A(k,l)\,e^{lx}}{(e^x-1)^{k+1}}\;dx = \zeta(s-k), where the A(k,l) are Eulerian numbers. We can then also write things like \frac{1}{\Gamma(s)}\int_0^\infty x^{s-1}\sum_{n=1}^{\infty} d(n)\,e^{-nx}\;dx = \zeta^2(s).
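The sign-sum expansion above is straightforward to test numerically; the following sketch (mine, not the note's) enumerates the 2^n sign vectors and compares the signed-sum exponential expansion against the direct sine product:

    import numpy as np
    from itertools import product

    def sin_product(a, x):
        out = np.ones_like(x)
        for ak in a:
            out = out * np.sin(ak * x)
        return out

    def expansion(a, x):
        # prod_k sin(a_k x) = (2i)^{-n} * sum over sign vectors of
        # (product of signs) * exp(i * (signed sum of a_k) * x)
        n = len(a)
        total = np.zeros_like(x, dtype=complex)
        for signs in product((1, -1), repeat=n):
            coeff = np.prod(signs)
            ssum = sum(s * ak for s, ak in zip(signs, a))
            total += coeff * np.exp(1j * ssum * x)
        return total / (2j) ** n

    x = np.linspace(0.1, 5.0, 7)
    a = (1, 2, 3, 5, 7, 11)
    print(np.max(np.abs(sin_product(a, x) - expansion(a, x))))  # ~1e-15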
The Clean Plate Sign
Thomas F Heston

August 25, 2023
Hospitalized patients, upon admission, often have a degree of anorexia which gradually resolves as their medical condition improves. Thus, a quick way to assess the overall improvement of hospitalized patients is to look at their plate after breakfast when rounding. Patients with a clean plate after eating their full meal are often close to, or ready for, discharge home.
THE CYBERNETIC REVOLUTION AND THE SIXTH TECHNOLOGICAL PARADIGM
Leonid Grinin

April 04, 2018
L. E. Grinin, A. L. Grinin. This research was carried out with support from the Russian Foundation for the Humanities (RGNF, project No. 14-02-00330).
Pseudo-Convolution
Benedict Irwin

March 13, 2026
If we have a function which is defined as the convolution
Why A General 45% Suicide Attempt Rate For Transgender Women Is Mathematically And O...
Hontas Farmer

April 02, 2018
An oft-repeated statistic is that 45% of transgender people attempt suicide at some time in their lives. A simple spreadsheet calculation shows that this probably can't be the case given the observed increase in the number of transgender people. Something else must have been going on with the particular study that is often cited (and misquoted) for that statistic. If that statistic were generalizable to the whole transgender population, then over a 50-year period the transgender population would shrink by half.\ref{921809} This is the opposite of the observed trend. Therefore it is mathematically impossible for that number to generalize beyond the sample in the cited study.
LIFE AND NATURAL SELECTION OF COMPLEX BIOCHEMICAL REACTIONS
Minas Sakellakis

March 24, 2018
ABSTRACT Here we discuss the concept that life has to do with the evolution and survival of the most stable and fittest combinations of chemical reactions over time. In this case, regardless of the initial conditions, the result will be similar due to selection. Once organic chemistry comes into play, the spatial complexity of the interactions becomes too enormous for equilibrium. In addition, if one excludes our perspective biases (which force us to divide life into individual organisms, systems, and organs), then life's reactions as a whole seem to be more about disorder than order. The final resulting reactions will appear to have survival and self-sustaining capacities, but this might be more of a self-fulfilling prophecy if the observers are exactly the resulting reactions. ARTICLE When we study the phenomenon of viruses, we can see that when viruses are not in contact with a host organism, they are only considered a sum of chemical compounds that do not necessarily fulfill the criteria to be considered alive. On the other hand, when they start reacting with a host, or in other words start making chemical reactions with the compounds of the host, they become alive. The same thing happens with prions, proteinaceous compounds that become alive, in a way, when they react with proteins of the host. So a simple chemical reaction, while happening, is the simplest form of life, or the spark of life. This means that higher organisms, and indeed all organisms, are summations of chemical reactions. What happens, then, when they die? There is a disorder in a system of reactions (for example brain necrosis, which means that in a large number of neural cells there is a defect in the reactions supposed to be happening there) that leads to a cascade of disorders in other reactions, and then in others, and so on. The final result is a defect in the whole body, transmitted in a chain-reaction way. What is the difference between a man that is alive and a man that is dead? In both cases the body consists of similar elements and compounds. But in the first case these compounds are reacting with each other and the structure of the body changes every moment. In the second case the chemical reactions of the body have been led to an equilibrium. The majority of scientists speculate that life originated from a single cell, which was the first cell on earth. This constituted the first thing that was a form of life, and the evolution of this cell resulted in the formation of life as we know and see it today. A problem with this idea is that if we had just a single cell on earth and outside of it there was nothing, then not only would this not lead to the formation of more complicated forms of life, but this single cell would soon be dead for lack of food. In the beginning, life on earth was simpler than today. This means that there was a system (network) of chemical reactions that gave its place to a more complicated one, and the system kept getting more and more complicated, with more reactions happening. This sounds a bit strange, because a system of chemical reactions that does not get energy from outside is led to an equilibrium state. Question: Can systems of primordial and inorganic chemical reactions, with the help of external energy, avoid chemical equilibrium and go towards a constantly increasing complexity state?
If you have a large number of initial substrates and they react with each other bidirectionally, then the number of substrates will increase over time. Additionally, by the time organic molecules with different stereochemistries are formed, the possibility of equilibrium will have virtually vanished, as the possible ways of molecular interaction are then greatly increased. In fact, after some time, only organic-based reactions will be present and selected, because all the others will have been lost to equilibrium. Complex organic stereochemistry doesn't reach an equilibrium state easily, due to the variability of possible isoforms; thus, every time such molecules were created, they persisted and survived, adding to the chemical system's complexity. Additionally, every time they reacted with other organic or inorganic material (e.g. water, CaCO3, etc.), they corrupted the other materials, adding to stereochemical complexity and thus constantly adding novel material into the chemical machinery available for life, in a similar way to how prions corrupt the chemistry of host organisms. This constantly increases the organic stereochemical reservoir. This reservoir can in theory undergo evolution and selection of the most sustainable chemical systems, and theoretically eventually create ever more sustainable complex chemical systems such as ourselves or the other living beings. In conclusion, we see that a perpetually and increasingly complex system of organic chemicals with infinite stereochemical variations can easily be created, provided there is a source of external energy in the system. As a result of this complex system, nucleic acids will (inevitably) be formed, as well as proteins and membranes. Thus, the latter are not necessarily the starting point of life. Question: What other forces will act on this primordial chemical system, adding to non-equilibrium and determining its fate in the long term? 1) Hydrophobicity (hydrophobic bonds, spatial configuration, separation and isolation of chemical systems, membranes, etc.). 2) Another crucial factor is the property of some molecules to strongly adhere to each other, or to adhere to membranes. (In fact, if you put living cells and dead cells in a flask, you can sort them easily, because only the living ones will strongly adhere to the walls.) Sticky reactions will eventually prevail and become the basis for further chemical complexity, because their chemical compounds will not diffuse away and lead to dead ends. This will make the process multifocal rather than diffuse, enhancing its ability to thrive. To see the importance of stickiness, take for instance the sponges. Recent studies have shown that they were among the first organisms on earth, along with corals. They don't seem quite like the other animals; in fact, I would say that they are something in between, more like random chemical systems. However, the strong adhesions between molecules (as well as multiple other factors) in sponges make those systems sustainable over time. In fact, they were created because they were not destroyed. They can sustain themselves for millennia. The same thing happens with corals. These systems could serve as something like "chemical labs", performing chemical experiments for thousands of years before they die. Any chemical novelty that can sustain itself will survive and will be selected.
3) In a chaos of chemical reactions, those with some kind of repeatability and periodicity will have an advantage and will not lead to a dead end, since they can keep occurring in the long term.

4) Reactions with the ability to promote their own existence will prevail and persist, in a process that amounts to natural selection and survival of the fittest reactions. For instance, a process that makes numerous copies of critical chemical compounds gains an advantage, because those compounds will be continuously over-represented in the chemical system.

Question: How can chemical reactions like these, occurring at random, lead to the formation of the structures we see and perceive as animals, plants, and organisms? Why don't we just see a random soup, a mixture of gases and fluids?

If you consider life as a WHOLE (without dividing it into species, organisms, etc.), you get nothing but a sum of chemical reactions. In other words, if you remove human-biased concepts such as organisms and systems, then life as a whole seems to lose much of its order. Imagine that, with the help of a light source, we cultivate some chemical reactions in a small space. Over time they grow more and more complicated. Suppose that one day the whole system becomes extremely complicated, to the point where we see nothing but a mixture of colors and shapes. This is life. But a human is part of this complicated system, which means he sees things in a mirror-like way, because he is inside the system. He is himself a sum of ongoing reactions, so it is very difficult for him to see life (the other reactions) fully objectively; he is running inside the whole system. It is all a matter of perspective.

For instance, the property of reproduction in living beings, which are chemical reactions, appears to be simply a result of the energy that forces the chemical reactions to keep happening. Life continues because chemical reactions continue. As an internal part of this system, we see this as regeneration of creatures, but only because we are running inside the system. Likewise, living organisms ordinarily do not die because the chemical reactions composing them keep occurring; if we analyzed all these reactions, we would have a very good view of their homeostasis and of the way they sustain themselves. As we said, we see the world from the inside, in a mirror-like direction, because we are ourselves part of things, so we judge things by their results. We think homeostasis and self-sustainability are magical, sophisticated self-sustaining mechanisms because we are the result of homeostasis; but on the theory analyzed here, homeostasis is simply the catalogue of the chemical reactions that are still happening, and it is precisely because they keep happening that the organism is alive. In other words, we find a purpose in every single reaction or procedure only because of our perspective. There is no particular plan favored in the flask full of chemicals; the system simply keeps happening. The final resulting reactions will appear to have survival capacities if the observers are exactly those resulting reactions: everything that happened led to them. So the final combination of reactions will be the most sustainable of all combinations under the particular conditions, because that is exactly what happened; those reactions prevailed in the long term.
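As a cartoon of the combinatorial-growth argument above, here is a minimal toy simulation (a sketch only: the starting substrates, the pairing rule, and the product labels are hypothetical choices for illustration, not chemistry). Each step combines a random pair of existing species into a new compound, so the set of distinct species never shrinks and, in practice, keeps climbing rather than settling into a fixed equilibrium composition:

    import random

    # Toy model (hypothetical rules, illustration only): start from a few
    # distinct substrates; at each step a random pair combines into a
    # product labeled by its parents. The set of distinct species never
    # shrinks, a cartoon of novelty accumulating instead of equilibrating.
    random.seed(0)
    species = {"A", "B", "C", "D"}

    for step in range(1, 16):
        x, y = random.sample(sorted(species), 2)
        species.add(f"({x}+{y})")            # new compound from the pair
        print(f"step {step:2d}: {len(species)} distinct species")

Real reaction networks would of course also consume and destroy species; the point of the sketch is only that pairwise reactivity over a growing alphabet multiplies the possible interactions instead of exhausting them.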
Life as we see it is simply the result of the chemical reactions on earth. As we said, we are part of the system and do not realize it; but if we were alien forms of life, for example, watching the earth from outer space, we would see only a very complicated network of reactions. On this reasoning, life seems to be more of an invention of ours, a concept we use to describe anything that functionally resembles us. An organism is the set of reactions we see, and we think those reactions are something amazing because we see them separately from all the other reactions happening in the world. We judge them by their result, which is that they turn out like us. We are part of the ongoing reactions as well, and when we see organisms that look like us we take them for independent creatures, but in fact they cannot be separated from the whole soup of reactions.

Question: Granted, the basic form of life is chemistry, but as we go higher we find levels of organization. Functions like killing, walking, and talking give some reactions an advantage over others in surviving. But surviving is important only to us. An observer outside the system of life would find no organization in these functions, because their results mean nothing to him.

Question: The system of chemical reactions described here is one of increasing entropy and disorder over time. But this contradicts our long-held belief that living beings are characterized by order, and hence by a lowering of entropy (see Schrödinger's ideas). If we want to examine whether the entropy of living beings actually increases or decreases during evolution, we must abandon human-created terms such as "order" and instead check for entropy changes using more objective tools and concepts, such as heat release. For instance, one might argue that to a nonliving object, such as a random stone, all the reactions of living beings are meaningless: a stone would "perceive" life as a whole as a disordered chemical chaos. We, on the other hand, are what we are because of certain properties of these reactions, so from our perspective there is a great deal of order. Recall that, as we said earlier, a human is not a neutral, objective observer of things: he changes together with the system, and this confuses him. It means that if human entropy is rising more slowly than the entropy of the whole living system, he will think his entropy is falling.

An example: imagine a large number of birds flying side by side in the same direction. If we tell them to fly apart from one another, the group will start to disperse and the entropy of the system will start to rise. Imagine also that three birds somewhere in the group are very close to one another. If they separate more slowly than the others, and we consider these three birds as a system, then this system's entropy will actually fall relative to the whole flock. As we said, we view the world through our own eyes, which can introduce subjectivity and misconceptions into our viewpoint, especially with respect to systems in which we ourselves are involved. We can objectively judge entropy changes in systems we are not part of; but in a system of reactions, e.g. A+B->C+D+….X+Z, if the reference frame (i.e., the observer) is an insider subgroup of that system (for instance K+L->M+N) judging entropy changes in the larger system, then this subset can only perceive entropy changes relative to itself. Remember the example of the birds.

Question: Someone might say that if living beings are only a sum of complex chemical reactions, then what prevents them from degrading into chemical chaos? For instance, absent a major adverse event or a catastrophic external factor, how can a human body maintain a viable structure for nearly 100 years instead of spontaneously degrading toward a higher-entropy state? A possible answer lies in our inability to fully appreciate and comprehend big numbers.

(Note: the numbers used in this comment are rough approximations, used only as an example to explain the idea.) Let's assume the human body degrades toward a higher-entropy state every day, losing, say, 100 thousand chemical reactions per day. An 80-year-old man has lived 29,200 days, so he has lost or changed nearly 3 billion reactions over his lifetime. If the total number of chemical reactions in his body is, say, 1 trillion, then after 80 years he will consist of 997 billion reactions, which is still virtually 1 trillion. So the impact of the whole process on the chemical reaction count is almost negligible macroscopically.

Question: How can such chemical reactions gain or sustain their repeatability, so that we see repeated patterns in life (e.g., reproduction)?

Although in theory a process that protects certain repeatable reactions could evolve and be selected, another possibility, which I personally think more likely, is this: are there truly repeatable processes in nature at all? For instance, if a descendant is 99% the same as its ancestor, and both are composed of 100 trillion reactions, they still differ by 1 trillion reactions. Likewise, if two systems of 100 organic compounds with various stereochemistries interact and grow more and more complex until each comprises 100 trillion different compounds, one would expect 99% of the compounds of one system to be roughly similar to those of the other purely by chance. Now, if two systems of 100 trillion reactions or possible interactions are exposed to the same chemical laws and conditions (variability prevails, hydrophobic bonds and adhesive properties prevail, stable molecules prevail, influx of external substances, same temperature, and so on), then the two systems, composed mainly of the same substances, will share approximately the same fate, at least to our eyes. For if 95% of what happens is the same in both systems, they still differ by many trillions of reactions, but for us that is enough to consider the two processes identical.
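A quick back-of-the-envelope check of the illustrative numbers in the two questions above (the counts are the author's stated assumptions, not measurements; the script merely redoes the arithmetic):

    days = 80 * 365                 # an 80-year life = 29,200 days
    lost_per_day = 100_000          # assumed reactions lost per day
    total = 10**12                  # assumed total reactions in a body

    lost = days * lost_per_day      # 2,920,000,000 -- "nearly 3 billion"
    remaining = total - lost        # 997,080,000,000 -- "997 billion"
    print(f"lost = {lost:,}, remaining = {remaining:,}, "
          f"fraction lost = {lost / total:.2%}")    # about 0.29%

Likewise, 1% of 100 trillion is 1 trillion, so a descendant "99% the same" as its ancestor still differs by a trillion reactions, exactly as noted above.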
Peer review in the CiSE RR Track
Lorena A. Barba
George K. Thiruvathukal

Lorena A. Barba

and 1 more

May 05, 2018
In our editorial launching the new Reproducible Research Track in CiSE \citep*{Barba_2017}, we promised to explore innovations to the peer-review process. Because we require articles submitted to this track to adhere to practices that safeguard reproducibility, we must review for these aspects deliberately. For each submission, a reproducibility reviewer will be charged with checking the availability, quality, and usability of digital artifacts (data, code, figures). This reviewer (sometimes one of the track editors) will be known to the authors, and may interact with the authors during the review—for example, opening issues on a code repository. For this service, we ask that the authors recognize the reviewer in the article's acknowledgements section.
Review of Homology-directed repair of a defective glabrous gene in Arabidopsis with C...
Elsbeth Walker
dchanrod

Elsbeth Walker

and 9 more

October 03, 2019
Homology-directed repair of a defective glabrous gene in Arabidopsis with Cas9-based gene targeting [Florian Hahn, Marion Eisenhut, Otho Mantegazza, Andreas P.M. Weber, January 5, 2018, BioRxiv] [https://doi.org/10.1101/243675]

Overview and take-home messages: Hahn et al. compare the efficiencies of two methods previously reported to enhance the frequency of homologous recombination in plants. The paper focuses on testing a viral replicon system with two different enzymes, a nuclease and a nickase, as well as an in planta gene targeting (IPGT) system in Arabidopsis thaliana. Interestingly, the authors chose GLABROUS1 (GL1), a regulator of trichome formation, as a visual marker for detecting Cas9 activity and hence homologous recombination: a 10 bp deletion in the coding region of the GL1 gene produces plants devoid of trichomes. Of the two methods, the in planta gene targeting approach successfully restored trichome formation in fewer than 0.2% of the ~2,500 plants screened, whereas the method based on the viral replicon machinery did not restore trichome formation at all. The manuscript is of high quality, and the experiments are well designed and executed. However, there are some concerns that could be addressed in the next preprint or print version. Below we offer feedback and suggestions that we hope will improve the manuscript.