AUTHOREA

Preprints

Explore 66,104 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.

Protein-protein interaction analysis of 2DE  proteomic data of  desiccation responsi...
Ryman Shoko


December 14, 2020
Abstract
Much research has focused on investigating the mechanisms of vegetative desiccation tolerance in resurrection plants. Various approaches have been used in such research, including high-throughput 'omics' approaches such as transcriptomics and metabolomics. Proteomics has since become preferable to transcriptomics, as it provides a view of the end-point of gene expression. However, most proteomics investigations in the literature publish lists of differentially expressed proteins and attempt to interpret those lists in isolation, despite the fact that proteins do not act in isolation. A comprehensive bioinformatic investigation can reveal more about the desiccation tolerance mechanisms of resurrection plants. In this work, a comprehensive bioinformatic analysis of the proteomic results published in Ingle et al. (2007) was carried out. GeneMANIA was used to carry out protein-protein interaction studies, while ClueGO was used to identify GO biological process terms. A preliminary map of protein-protein interactions was built, which led to the prediction of additional proteins likely to be connected to those identified by Ingle et al. (2007). Briefly, whereas 2DE proteomics identified 17 proteins as differentially regulated (4 de novo, 6 up-regulated and 7 down-regulated), GeneMANIA added 57 more proteins to the network (de novo - 20, up-regulated - 17, down-regulated - 20). Each protein set had unique GO biological process terms overrepresented in it. This study explores the protein pathways affected by desiccation stress from an interactomic perspective, highlighting the importance of advanced bioinformatic analysis.

Introduction
Resurrection plants can survive extreme water loss, persist for long periods in an anabiotic state and, upon watering, rapidly restore their normal metabolism (reviewed inter alia in Farrant, 2007).
Understanding the mechanisms of desiccation tolerance (DT) in resurrection plants is important, as they are deemed an excellent model for studying the mechanisms associated with DT. Proteomic profiling offers the opportunity to identify the proteins that mediate the pathways involved in DT mechanisms when cells are subjected to desiccation stress. A number of proteomics studies have been reported for leaves of angiosperm resurrection plants during desiccation (Röhrig et al., 2006; Ingle et al., 2007; Jiang et al., 2007; Abdalla et al., 2010; Wang et al., 2010; Oliver et al., 2011; Abdalla and Rafudeen, 2012, among others). Since DT involves the integrated actions of many proteins, a systems-level understanding of experimentally derived proteomics data is essential to gain deeper insight into the protection mechanisms employed by resurrection plants against desiccation. In recent years, increasing emphasis has been put on the integrated analysis of gene expression data via protein-protein interactions (PPI), which are widely applied in interaction prediction, functional module identification and protein function prediction. In this work, PPI analysis is applied to the proteomics data obtained by Ingle et al. (2007) during the desiccation of Xerophyta viscosa leaves. In their study, using 2DE, they identified 17 desiccation-responsive proteins (4 de novo, 6 up-regulated and 7 down-regulated). The first aim of this work is to establish whether the proteins in each set interact; if they do, the second aim is to establish whether any statistically significant GO biological process terms can be observed in each set.

Methods
Protein lists
The initial protein lists used in the PPI analyses in this work were obtained from the 2DE data of Ingle et al. (2007) (see Table 2 in Ingle et al. (2007)).
Protein-protein interaction analysis
The Cytoscape v3.8.1 (Shannon et al., 2003) app GeneMANIA (Warde-Farley et al., 2010) was used to derive the interactome of empirically determined and predicted PPIs for the differentially regulated protein lists. The 'up-regulated', 'down-regulated' and 'de novo' protein lists were used as query lists for the PPI studies. Arabidopsis thaliana homologs of the desiccation-responsive protein sets were used as query genes, and the program was run with default settings.

GO biological process functional enrichment analysis
The Cytoscape app ClueGO v2.5.7 (Bindea et al., 2009) was used for enrichment of GO biological process terms. ClueGO extracts the non-redundant biological information for groups of genes/proteins using GO terms and can conduct cluster-cluster comparisons. In the present study, TAIR identifiers from the extended list of desiccation-responsive proteins obtained from GeneMANIA were used as input protein cluster lists, and ontology terms were derived from A. thaliana. The ClueGO 'cluster comparison' allowed the identification of biological process terms unique to each protein/gene list.
On the sources of systemic risk in cryptocurrency markets
Dr. Percy Venegas


March 11, 2018
Value in algorithmic currencies resides literally in the information content of the calculations; but given the constraints of consensus (security drivers) and the necessity for network effects (economic drivers), the definition of value extends to the multilayered structure of the network itself: to the information content of the topology of the nodes in the blockchain network, and to the complexity of the economic activity in the peripheral networks of the web, mesh-IoT networks, and so on. In this phase change between the information flows of the native network that serves as the substrate to the blockchain and those of real-world data, a new "fragility vector" emerges. Our research question is whether factors related to market structure and design, transaction and timing cost, price formation and price discovery, information and disclosure, and market maker and investor behavior are quantifiable to a degree that they can be used to price risk in digital asset markets. The results obtained show that while in the popular discourse blockchains are considered robust and cryptocurrencies anti-fragile, the cryptocurrency markets are in fact fragile. This research is pertinent to the regulatory function of governments, which are actively seeking to advance the state of knowledge regarding systemic risk and to develop policies for crypto markets, and to investors, who need to expand their understanding of market behavior beyond explicit price signals and technical analysis.
The hitchhikers' guide to reading and writing health research
Arindam Basu


February 23, 2021
In this paper, we introduce the concepts of critically reading research papers and writing research proposals and reports. Research methods is a general term that includes the processes of observing the world around the researcher, linking background knowledge with foreground questions, drafting a plan for the collection of data, framing theories and hypotheses, testing the hypotheses, and finally, drafting or writing the research to evoke new knowledge. These processes vary with the themes and disciplines that the researcher engages in; nevertheless, common motifs can be found. In this paper, we propose that three interlinked methods of reasoning are at work: a deductive reasoning process, in which the structure of a thought can be captured critically; an inductive reasoning method, in which the researcher can appraise and generate generalisable ideas from observations of the world; and finally, an abductive reasoning method, in which observed phenomena can be explained or accounted for. This step of reasoning is also about framing theories, testing and challenging established knowledge, and finding the theories that best fit the observations. We start with a discussion of the different types of statements one can come across in scholarly literature, or even in lay or semi-serious literature, appraise them, and distinguish arguments from non-arguments and explanations from non-explanations. Then we outline three strategies to appraise and identify the reasoning in explanations and arguments. We end with a discussion of how to draft a research proposal and a reading/archiving strategy for research.
Margin-of-Error Calculator for Interpreting Student  and Course Evaluation Data
Kenneth Royal, PhD


February 19, 2018
Overview
An online calculator was created to help college faculty and K-12 teachers discern the adequacy of a sample size and/or response rate when interpreting student evaluation of teaching (SET) results. The online calculator can be accessed here: http://go.ncsu.edu/cvm-moe-calculator.
About the calculator
One of the most common questions consumers of course and instructor evaluations (also known as "Student Evaluations of Teaching") ask pertains to the adequacy of a sample size and response rate. Arbitrary guidelines (e.g., 50%, 70%, etc.) that guide most interpretive frameworks are misleading and not based on empirical science. In truth, the sample size necessary to discern statistically stable measures depends on a number of factors, not least the degree to which scores deviate on average (standard deviation). As a general rule, scores that vary less (i.e., smaller standard deviations) will require a smaller sample size (and lower response rate) than scores that vary more (i.e., larger standard deviations). Traditional margin-of-error (MOE) formulas do not account for this detail, so this MOE calculator is unique in that it computes a MOE with score variation taken into consideration. Other details of the formula also differ from traditional MOE computations (e.g., use of a t-statistic as opposed to a z-statistic) to make the formula more robust for educational scenarios in which smaller samples are often the norm. This MOE calculator is intended to help consumers of course and instructor evaluations make more informed decisions about the statistical stability of a score. It is important to clarify that the MOE calculator can only speak to issues relating to sampling quality; it cannot speak to other types of error (e.g., measurement error stemming from instrument quality) or bias (e.g., non-response bias).
Persons interested in learning more about the MOE formula, or researchers reporting MOE estimates using the calculator, should read/cite the following papers:
James, D. E., Schraw, G., & Kuch, F. (2015). Using the sampling margin of error to assess the interpretative validity of student evaluations of teaching. Assessment & Evaluation in Higher Education, 40(8), 1123-41. doi:10.1080/02602938.2014.972338.
Royal, K. D. (2016). A guide for assessing the interpretive validity of student evaluations of teaching in medical schools. Medical Science Educator, 26(4), 711-717. doi:10.1007/s40670-016-0325-9.
Royal, K. D. (2017). A guide for making valid interpretations of student evaluations of teaching (SET) results. Journal of Veterinary Medical Education, 44(2), 316-322. doi:10.3138/jvme.1215-201r.
Interpretation guide for course and instructor evaluation results
Suppose a course consists of 100 students (population size), but only 35 students (sample size) complete the course (or instructor) evaluation, resulting in a 35% response rate. The mean rating for the evaluation item "Overall quality of course" was 3.0 with a standard deviation (SD) of 0.5. Upon entering the relevant values into the Margin of Error (MOE) calculator, we see this would result in a MOE of 0.1385 when alpha is set to .05 (95% confidence level). In order to use this information, we need to do two things: First, include the MOE value as a ± value in relation to the mean. Using the example above, we can say with 95% confidence that the mean of 3.0 could be as low as 2.8615 or as high as 3.1385 for the item "Overall quality of course". Next, in order to understand the MOE percentage, we must first identify the length of the rating scale and its relation to the MOE size. For example, if using a 4-point scale we would use an inclusive range of 1-4, where the actual length of the scale is 3 units (i.e., the distance from 1 to 2, 2 to 3, and 3 to 4).
So, a 3% MOE would equate to 0.09 (3 scale units x 3.00% = 0.09). Similarly, a 5-point scale would use an inclusive range of 1-5, where the actual length of the scale is 4 units. In this case, a 3% MOE would equate to 0.12 (4 scale units x 3.00% = 0.12). Finally, we would refer to the interpretation guide (below) to make an inference about the interpretive validity of the score. In the above example the MOE for the item "Overall quality of course" was 0.1385. If we are using a 4-point scale, this value falls between 0.09 and 0.15, which corresponds to 3 to 5% of the scale (this is good!). So, we could infer that the 35 students who completed the evaluation (sample) are a sufficient sample from a course consisting of 100 students (population) to yield a statistically stable result for the item "Overall quality of course", as the margin of error falls within ± 3-5%.
Note: It is important to keep in mind that 35 students are adequate in this specific example because the scores deviated on average (standard deviation) by 0.5. If the standard deviation for the item was, say, 1.0, then 35 students would have yielded a MOE of 0.2769. This value would greatly exceed 0.15, indicating the MOE is larger than 5%, and would call into question the statistical stability of the score in this scenario.

For a 4-point rating scale:
*Please note the interpretation guide does not consist of rigid rules, but merely reasonable recommendations.
Margin of Error | Margin of Error (%) | Interpretive Validity*
Less than 0.09 | Less than ± 3% | Excellent interpretive validity
Between 0.09-0.15 | Between ± 3-5% | Good interpretive validity
Greater than 0.15 | Greater than ± 5% | Questionable interpretive validity; values should be interpreted with caution

For a 5-point rating scale:
*Please note the interpretation guide does not consist of rigid rules, but merely reasonable recommendations.
Margin of Error | Margin of Error (%) | Interpretive Validity*
Less than 0.12 | Less than ± 3% | Excellent interpretive validity
Between 0.12-0.20 | Between ± 3-5% | Good interpretive validity
Greater than 0.20 | Greater than ± 5% | Questionable interpretive validity; values should be interpreted with caution

Example at NC State University:
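The worked example in the text (N = 100, n = 35, SD = 0.5, alpha = .05, MOE = 0.1385) can be reproduced in a few lines. This is only a sketch: the calculator's exact formula is not published in this overview, but a t-based margin of error with a finite-population correction matches the quoted numbers. The function name and the t-table value are this sketch's own choices, not part of the calculator.

```python
import math

def margin_of_error(n, N, sd, t_crit):
    """t-based margin of error with a finite-population correction (a sketch).

    n: sample size (respondents), N: population size (enrolled students),
    sd: standard deviation of the ratings, t_crit: two-tailed t critical
    value for df = n - 1 at the chosen alpha (from a t-table).
    """
    se = sd / math.sqrt(n)        # standard error of the mean
    fpc = math.sqrt((N - n) / N)  # finite-population correction
    return t_crit * se * fpc

# Worked example from the text: N = 100, n = 35, SD = 0.5, alpha = .05.
# The t critical value for df = 34 at 95% confidence is about 2.032.
moe = margin_of_error(35, 100, 0.5, 2.032)
print(round(moe, 4))             # ~0.1385, matching the example above

# Express the MOE as a percentage of a 4-point scale (3 scale units):
print(round(moe / 3 * 100, 1))   # ~4.6%, i.e. "good" per the guide
```

Doubling the SD to 1.0 doubles the result to ~0.2769, again matching the note above.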
Discussing the culture of preprints with auditory neuroscientists
Daniela Saderi, Ph.D.
Adriana Bankston


February 28, 2018
I started writing this memo while on an airplane, flying back from sunny San Diego. While definitely one of the highlights of the trip, the sunshine was not the reason for my visit to Southern California. Instead, I was there with hundreds of other auditory neuroscientists from all over the world to attend the 41st MidWinter Meeting of the Association for Research in Otolaryngology (ARO).
Agate Analysis by Raman, XRF, and Hyperspectral Imaging Spectroscopy for Provenance...
Aaron J. Celestian


and 7 more

January 30, 2019
AbstractThe Getty Institute recently acquired the Borghese-Windsor Cabinet (Figure \ref{620486}), a piece of furniture extensively decorated with agate, lapis lazuli, and other semi-precious stones.  The cabinet is thought to have been built around 1620 for Camillo Borghese (later Pope Paul V).  The Sixtus Cabinet, built around 1585 for Pope Sixtus V (born Felice Peretti di Montalto), is of similar design to the Borghese-Windsor and also ornately decorated with gemstones.  Although there are similarities in gemstones between the two cabinets, the Sixtus and Borghese-Windsor cabinets vary in their agate content.  It was traditionally thought that all agate gemstones acquired during the 16th and 17th centuries were sourced from the Nahe River Valley near Idar-Oberstein, Germany.  It is known that Brazilian agate began to be imported into Germany by the 1800s, but it is possible that some was imported in the 18th century or earlier.  A primary research goal was to determine if the agates in the Borghese-Windsor Cabinet are of single origin, or if they have more than one geologic provenance. Agates are made of SiO2, mostly as the mineral quartz, but also as metastable moganite.  Both quartz and moganite will crystallize together as the agate forms, but moganite is not stable at Earth's surface and will convert to quartz over tens of millions of years \cite{Moxon_2004,Peter_J_Heaney_1995,G_slason_1997}, thus relatively older agate contains less moganite.  Agate from the Idar-Oberstein is Permian in age (around 280 million years old), while agate from Rio Grande do Sul of Brazil generally formed during the Cretaceous (around 120 million years old).  It is thought that Rio Grande do Sul would have been a primary source of material exported to Europe because it is one of Brazil's oldest and largest agate gemstone producers.  
Since Cretaceous agate from Brazil is many millions of years younger than Permian agate from Germany, the quartz to moganite ratios between the two localities should be quite different.  The agate gemstones of the Borghese-Windsor Cabinet cannot be removed for detailed Raman mapping experiments.    Because of this, we first analyzed multiple agate specimens from the collections of the Natural History Museum of Los Angeles (NHMLA) and the Smithsonian Institution National Museum of Natural History (NMNH) using three different techniques: Raman mapping, XRF mapping, and hyperspectral imaging. Raman spectroscopy provides an easy method to distinguish the relative quartz:moganite ratios and XRF analysis provides a measure of bulk geochemistry in agates.  Maps have advantages over line scans and point analysis in that they give a better representation of the mineral content, can be used to exclude trace mineral impurities, and yield better counting statistics and averaging.   Hyperspectral imaging provides a range of optical data from IR through UV wavelengths.   
PREreview of "Frequent lack of repressive capacity of promoter DNA methylation identi...
Hector Hernandez-Vargas

Hector Hernandez-Vargas

February 18, 2018
This is a review of the preprint "Frequent lack of repressive capacity of promoter DNA methylation identified through genome-wide epigenomic manipulation" by Ethan Edward Ford,  Matthew R. Grimmer,  Sabine Stolzenburg,  Ozren Bogdanovic,  Alex de Mendoza,  Peggy J. Farnham,  Pilar Blancafort, and  Ryan Lister.The preprint was originally posted on bioRxiv on September 20, 2017 (DOI: https://doi.org/10.1101/170506).  
The State Of Stablecoins- Why They Matter & Five Use Cases
Sheikh Mohammed Irfan
Robert Samuel Keaoakua Lin


and 2 more

July 01, 2019
Price-stable cryptocurrencies, commonly referred to as stablecoins, have received a significant amount of attention recently. Much of this has been in the hope that they can fix some of the issues with cryptocurrency, most notably price instability. However, little analysis has been done with respect to the drivers and investment potential of stablecoins. Stablecoins fulfill different functions of money depending on their implementation. As a result, they have unique trade-offs from one another and from physical currency (fiat) itself. Stablecoins offer a value proposition similar to fiat, but the two should not be compared on a one-to-one basis, as stablecoins carry their own trade-offs and benefits. These differences will drive the demand for these tokens while enabling specific use cases. The purpose of this paper is to shed light on the adoption and the potential market share growth of stablecoins given five selected use cases: dollarization, smart contracts, peer-to-peer (P2P) and peer-to-business (P2B) payments, a safe haven for exchanges, and a reserve currency. We will discuss the opportunities within each of these use cases and assess the factors that will determine the success of stablecoins. Using the insights contained in this paper, technologists can think about how best to position themselves in the short, medium, and long term.
Emerging Countries and Trends in the World Trade Network: A Link Analysis Approach
Yash Raj Lamsal


July 26, 2019
Abstract
The landscape of the world trade network has changed in the last few decades. This paper analyses the World Trade Network (WTN) from 1990 to 2016, using the trade data available on the International Monetary Fund (IMF) website, and presents the evolution of key players in the network using link analysis properties. Link analysis evaluates the strength of the links between nodes of a network to characterise the properties of the network. The paper uses link analysis algorithms, namely PageRank and HITS (hubs and authorities), to evaluate the strength or importance of nodes in the World Trade Network. A higher PageRank represents higher import dependencies, a higher authority score of a country denotes the significance of its imports from hub countries, and a higher hub score indicates the significance of a country's exports of final products to authority countries. The findings show the emergence of Asian countries, especially China, as key players in the world.
Key Words: World Trade Network, link analysis, PageRank, Authority, Hubs
Introduction
The value of total global exports in the year 2016 is almost five (4.96) times the value in the year 1990. This fivefold growth in trade value is largely contributed by Emerging Market Economies (EMEs) \cite{Riad2012}. This indicates that trade plays a vital role in the national as well as the international economy. In this context, studying world trade from a complex network perspective provides meaningful insights. The World Trade Network (WTN) is a weighted, directed complex network of the countries of the world. In network science, a network is a collection of nodes and links, where links are relations between the nodes; in graph theory, a graph is a collection of vertices and edges, where edges are relationships between vertices. Graph and network are used interchangeably in this paper.
In the WTN, nodes represent the countries of the world and links represent the relationship between two countries, where the relationship is a flow of trade from one country to another. The study of the WTN within a network and graph theory framework has been growing and can be found in the literature \cite{Reyes2014,Deguchi2014} \cite{Ermann2011,Benedictis2010}. This paper uses link analysis algorithms to analyze the WTN. Link analysis extracts information from a connected structure like the WTN \cite{Chakraborty}. Understanding such a connected structure of trade furnishes an immense source of information about the world economy, and this paper uses approaches that were initially adopted to understand the World Wide Web (WWW) \cite{Kleinberg1999}. Link analysis methods are also used to identify experts in social networks \cite{Kardan2011}. In this paper, the link analysis algorithms HITS (Hypertext Induced Topic Search) \cite{Kleinberg1999} and PageRank \cite{Page1998} are used to find the importance of countries based on the export value from one country to another. HITS and PageRank are also among the most frequently cited web information retrieval algorithms (Langville & Meyer, 2005). Link analysis of the WTN assigns an importance value to each country in the WTN. This paper studies and analyzes the WTN data from 1990 to 2016 as a weighted, directed network. Using the graph framework and applying a link analysis perspective, the paper tries to identify the emerging countries and their evolution during the study period. The following section describes the link analysis algorithms used in the study, and the subsequent section describes and discusses the findings.
HITS Algorithm
The HITS algorithm is also known as the hubs and authorities algorithm (Kleinberg, 1999). This algorithm gives hub and authority rankings for each member of the network. The hub score of a node is the sum of the authority scores of all the nodes it points to, and the authority score of a node is the sum of the hub scores of all the nodes pointing to it. Hubs and authorities exhibit a mutually reinforcing relationship: a good hub is a node that points to many good authorities; a good authority is a node that is pointed to by many good hubs (Kleinberg, 1999). In the WTN, hubs are countries with large export values that export to good authority countries, and an authority is a country with large import values that imports from good hub countries.
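The mutually reinforcing hub/authority recurrence above can be sketched as a weighted power iteration on a toy network. The country codes and trade values below are made up purely for illustration (the paper itself uses IMF trade data), and the normalisation choice is one common convention.

```python
# Toy weighted, directed "trade network": edge (u, v, w) means
# country u exports value w to country v. Values are invented.
edges = [
    ("CHN", "USA", 500), ("CHN", "DEU", 200), ("DEU", "USA", 150),
    ("USA", "CHN", 120), ("NPL", "CHN", 5),  ("NPL", "USA", 3),
]
nodes = sorted({n for u, v, _ in edges for n in (u, v)})

hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}

for _ in range(50):  # power iteration until the scores stabilise
    # authority: weighted sum of hub scores of countries exporting to you
    auth = {n: sum(w * hub[u] for u, v, w in edges if v == n) for n in nodes}
    norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
    auth = {n: a / norm for n, a in auth.items()}
    # hub: weighted sum of authority scores of countries you export to
    hub = {n: sum(w * auth[v] for u, v, w in edges if u == n) for n in nodes}
    norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
    hub = {n: h / norm for n, h in hub.items()}

print(max(hub, key=hub.get))   # biggest exporter hub: CHN
print(max(auth, key=auth.get)) # biggest importer authority: USA
```

In this toy data the largest exporter comes out as the top hub and the largest importer as the top authority, mirroring the interpretation given in the text.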
DiversityNet: a collaborative benchmark for generative AI models in chemistry
Mostapha Benhenda
Esben Jannik Bjerrum


and 3 more

May 10, 2019
Commenting on the document is possible without registration, but for editing, you need to:
- Register on Authorea: https://www.authorea.com/
- Join the DiversityNet group: https://www.authorea.com/inst/18886
- Come back here

Code: https://github.com/startcrowd/DiversityNet
Blog post: https://medium.com/the-ai-lab/diversitynet-a-collaborative-benchmark-for-generative-ai-models-in-chemistry-f1b9cc669cba
Telegram chat: https://t.me/joinchat/Go4mTw0drJBrCdal0JWu1g

Generative AI models in chemistry are increasingly popular in the research community. They have applications in drug discovery and organic materials (solar cells, semi-conductors). Their goal is to generate virtual molecules with desired chemical properties (more details in this blog post). However, this flourishing literature still lacks a unified benchmark. Such a benchmark would provide a common framework to evaluate and compare different generative models. Moreover, it would make it possible to formulate best practices for this emerging industry of 'AI molecule generators': how much training data is needed, for how long the model should be trained, and so on. That's what the DiversityNet benchmark is about. DiversityNet continues the tradition of data science benchmarks, after the MoleculeNet benchmark (Stanford) for predictive models in chemistry, and the ImageNet challenge (Stanford) in computer vision.
Alternative method for modelling structural and functional behaviour of a Storage Hyd...
Carlos Graciós
Rosa María


and 4 more

October 16, 2018
Nowadays, the primary challenge worldwide is the efficient use of energy. International and regional efforts have been developed to address it, including government agreements and sectoral projects between academic, industrial, social and entrepreneurial actors as well. Nevertheless, the traditional and well-known strategies applied to generate, store, distribute and use electrical power are heavily exploited. Considering national policies on the responsible production and use of electrical energy, hydroelectric plants (HEP) meet the strict requirements in this context. England is one specific example of balancing efficiency in the production and use of electrical power, with a third of the electrical grid covered by the Dinorwig HEP. This plant has been studied and improved with several strategies...
Plant Biology Journal Club 
Elsbeth Walker
Ahmed Ali


and 10 more

March 07, 2018
Medicago truncatula Zinc-Iron Permease6 provides zinc to rhizobia-infected nodule cells [Isidro Abreu, Angela Saez, Rosario Castro-Rodriguez, Viviana Escudero, Benjamin Rodriguez-Haas,  Marta Senovilla, Camille Laure, Daniel Grolimund, Manuel Tejada-Jimenez, Juan Imperial, Manuel Gonzalez-Guerrero , January 24, 2017 (preprint),  September 21, 2017 (in print), BioRxiv & Wiley-Blackwell]
Calculate tract based weighted means
Do Tromp


February 06, 2018
Extracting the weighted means of individual fiber pathways can be useful when you want to quantify the microstructure of an entire white matter structure. This is specifically useful for tract-based analyses, where you run statistics on specific pathways and not the whole brain. You can read more on the distinction between tract-based and voxel-based analyses here: http://www.diffusion-imaging.com/2012/10/voxel-based-versus-track-based.html. The prerequisite steps to get to tract-based analyses are described in the tutorials on this website: http://www.diffusion-imaging.com. In the first tutorial we covered how to process raw diffusion images and calculate tensor images. In the second tutorial we described how to normalize a set of diffusion tensor images (DTI) and run statistics on the normalized brain images (including voxel-based analyses). In the last tutorial we demonstrated how to iteratively delineate fiber pathways of interest using anatomically defined waypoints. Here we will demonstrate and provide code examples on how to calculate a weighted mean scalar value for entire white matter tracts. The principle relies on using the density of tracts running through each voxel, as a proportion of the total number of tracts in the volume, to get a weighted estimate. Once you have a proportional index map for each fiber pathway of interest, you can multiply this weighting factor by the value of the diffusion measure (e.g. FA) in that voxel to get the weighted scalar value of each voxel. Shout out to Dr. Dan Grupe, who initiated and wrote the core of the weighted mean script. As a note, this script can also be used to extract cluster significance from voxel-wise statistical maps. See an example of this usage at the end of this post. The weighted mean approach allows for differential weighting of voxels within a white matter pathway that have a higher fiber count, which is most frequently observed in areas more central to the white matter tract of interest.
At the same time this method will down-weight voxels at the periphery of the tracts, areas that often suffer from partial volume effects, as voxels that contain white matter may also overlap gray matter and/or cerebrospinal fluid (CSF). To start off you will need a NIfTI-format tract file, for example as can be exported from TrackVis. See more details on how to do this in Tutorial 3. You also need scalar files, like FA or MD.

Overview of software packages used in this code:
- TrackVis by MGH (download TrackVis here): http://trackvis.org/docs/
- fslstats by FSL (download FSL here): http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils
- fslmaths by FSL (download FSL here): http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils

Save the weighted mean code below into a text file named "weighted_mean.sh". Make sure the file permissions for this program are set to executable by running this line after saving:

chmod 770 weighted_mean.sh

Note that the code in "weighted_mean.sh" assumes:
- A base directory where all folders with data are located: ${baseDir}
- A text file with the structures you want to run. Here the naming is defined by the name of each file, located in the main directory in a folder called STRUCTURES: ${baseDir}/STRUCTURES/${region}
- A text file with the scalars you want to run. The naming here is defined by how your scalar files are appended, e.g. "subj_fa.nii.gz"; in this case "fa" is the identifier of the scalar file: ${scalar_dir}/*${sub}*${scalar}.nii*
- The location of all the scalar files in the scalar directory: ${scalar_dir}
- A list of subject prefixes that you want to run.
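The weighted-mean principle described above can be sketched in a few lines of Python. The voxel densities and FA values below are made up for illustration (the actual pipeline computes the same quantities with fslstats and fslmaths, as in the script below):

```python
# Minimal sketch of the weighted-mean principle: each voxel has a tract
# density (fiber count) and a scalar value (e.g. FA). Numbers are invented.
density = [10, 40, 80, 40, 10]   # tracks running through each voxel
fa = [0.30, 0.45, 0.55, 0.48, 0.25]

total_tracks = sum(density)
weights = [d / total_tracks for d in density]          # proportional index map
weighted_fa = sum(w * v for w, v in zip(weights, fa))  # weighted mean FA

plain_mean = sum(fa) / len(fa)
# The dense central voxels pull the weighted mean above the plain mean:
print(round(weighted_fa, 3), round(plain_mean, 3))  # → 0.482 0.406
```

Note how the high-density core voxels dominate the estimate, while the low-density peripheral voxels (the ones prone to partial volume effects) contribute little.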
Weighted mean code:

#!/bin/bash
# 2013-2018
# Dan Grupe & Do Tromp

if [ $# -lt 3 ]
then
  echo
  echo "ERROR, not enough input variables"
  echo
  echo "Create weighted mean for multiple subjects, for multiple structures, for multiple scalars"
  echo "Usage:"
  echo "sh weighted_mean.sh {process_dir} {structures_text_file} {scalars_text_file} {scalar_dir} {subjects}"
  echo "e.g.:"
  echo "weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/etc S01 S02"
  echo
else
  baseDir=$1
  echo "Output directory "$baseDir
  structures=`cat $2`
  echo "Structures to be run "$structures
  scalars=`cat $3`
  echo "Scalars to be run "$scalars
  scalar_dir=$4
  echo "Directory with scalars "$scalar_dir
  cd ${baseDir}
  mkdir -p -v ${baseDir}/weighted_scalars
  finalLoc=${baseDir}/weighted_scalars
  shift 4
  subject=$*
  echo
  echo "~~~Create Weighted Mean~~~"
  for sub in ${subject}; do
    cd ${baseDir}
    for region in ${structures}; do
      img=${baseDir}/STRUCTURES/${region}
      final_img=${finalLoc}/${region}_weighted
      for scalar in ${scalars}; do
        #if [ ! -f ${final_img}_${sub}_${scalar}.nii.gz ]; then
        scalar_image=${scalar_dir}/*${sub}*${scalar}.nii*
        #~~Calculate voxelwise weighting factor:
        #~~(number of tracks passing through voxel)/(total number of tracks passing through all voxels)~~
        #~~First calculate total number of tracks - roundabout method because there is no 'sum' feature in fslstats~~
        echo
        echo "~Subject: ${sub}, Region: ${region}, Scalar: ${scalar}~"
        totalVolume=`fslstats ${img} -V | awk '{ print $1 }'`
        avgDensity=`fslstats ${img} -M`
        echo "avgDensity=${avgDensity}"
        totalTracksFloat=`echo "$totalVolume * $avgDensity" | bc`
        echo "totalTracksFloat=${totalTracksFloat}"
        totalTracks=${totalTracksFloat/.*}
        echo "totalTracks=${totalTracks}"
        #~~Then divide number of tracks passing through each voxel by total number of tracks
        #~~to get the voxelwise weighting factor~~
        echo "fslmaths ${img} -div ${totalTracks} ${final_img}"
        fslmaths ${img} -div ${totalTracks} ${final_img}
        #~~Multiply weighting factor by scalar of each voxel to get the weighted scalar value of each voxel~~
        #~~(scaled by 10000 to preserve decimals)~~
        echo "fslmaths ${final_img} -mul ${scalar_image} -mul 10000 ${final_img}_${sub}_${scalar}"
        fslmaths ${final_img} -mul ${scalar_image} -mul 10000 ${final_img}_${sub}_${scalar}
        #else
        #  echo "${region} already completed for subject ${sub}"
        #fi
        #~~Sum together these weighted scalar values for each voxel in the region~~
        #~~Again, roundabout method because there is no 'sum' feature~~
        totalVolume=`fslstats ${img} -V | awk '{ print $1 }'`
        echo "totalVolume=${totalVolume}"
        avgWeightedScalar=`fslstats ${final_img}_${sub}_${scalar} -M`
        echo "avgWeightedScalar=${avgWeightedScalar}"
        value=`echo "${totalVolume} * ${avgWeightedScalar}" | bc`
        echo ${sub}, ${region}, ${scalar}, ${value} >> ${final_img}_output.txt
        echo ${sub}, ${region}, ${scalar}, ${value}
        #~~ Remember to divide final output by 10000 ~~
        #~~ and tr also by 3 ~~
        rm -f ${final_img}_${sub}_${scalar}*.nii.gz
      done
    done
  done
fi

Once
the weighted mean program is saved in a file, you can run groups of subjects through it. See for example the script below.

Run weighted mean for a group of subjects:

#Calculate weighted means:
echo fa tr ad rd > scalars_all.txt
echo CING_L CING_R UNC_L UNC_R > structures_all.txt
sh /Volumes/Vol/processed_DTI/SCRIPTS/weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/Vol/processed_DTI/scalars S01 S02 S03 S04

Once that finishes running you can organize the output data and divide the output values by 10000. This is necessary because the weighted mean code multiplies values by 10000 to make sure they have a sufficient number of decimals. Furthermore, this code also divides trace (TR) values by 3 to obtain the mean diffusivity (MD = TR/3).

Organize output data:

cd /Volumes/Vol/processed_DTI/weighted_scalars
for scalar in fa tr ad rd; do
  for structure in CING_L CING_R UNC_L UNC_R; do
    rm -f ${structure}_${scalar}_merge.txt
    echo "Subject" >> subject${scalar}${structure}.txt
    echo ${structure}_${scalar} >> ${structure}_${scalar}_merge.txt
    for subject in S01 S02 S03 S04; do
      echo ${subject} >> subject${scalar}${structure}.txt
      var=`cat *_weighted_output.txt | grep ${subject} | grep ${structure} | grep ${scalar} | awk 'BEGIN{FS=" "}{print $4}'`
      if [ "${scalar}" == "tr" ]
      then
        value=`bc <<< "scale=8; $var / 30000"`
      else
        value=`bc <<< "scale=8; $var / 10000"`
      fi
      echo $value >> ${structure}_${scalar}_merge.txt
    done
    mv subject${scalar}${structure}.txt subject.txt
    cat ${structure}_${scalar}_merge.txt
  done
done

#Print data to text file and screen
rm all_weighted_output_organized.txt
paste subject.txt *_merge.txt > all_weighted_output_organized.txt
cat all_weighted_output_organized.txt

This should provide you with a text file with columns for
each structure & scalar combination, with rows for each subject. You can then export this to your favorite statistical processing software. Finally, as promised, code to extract significant clusters from whole-brain voxel-wise statistics, in this case from FSL's Randomise output.

Extract binary files for each significant cluster:

#Extract cluster values for all significant maps
dir=/Volumes/Vol/processed_DTI/
cd $dir
rm modality_index.txt
for study in DTI/STUDY_randomise_out DTI/STUDY_randomise_out2; do
  prefix=`echo $study | awk 'BEGIN{FS="randomise_"}{print $2}'`
  cluster -i ${study}_tfce_corrp_tstat1 -t 0.95 -c ${study}_tstat1 --oindex=${study}_cluster_index1
  cluster -i ${study}_tfce_corrp_tstat2 -t 0.95 -c ${study}_tstat2 --oindex=${study}_cluster_index2
  num1=`fslstats ${study}_cluster_index1.nii.gz -R | awk 'BEGIN{FS=" "}{print $2}' | awk 'BEGIN{FS="."}{print $1}'`
  num2=`fslstats ${study}_cluster_index2.nii.gz -R | awk 'BEGIN{FS=" "}{print $2}' | awk 'BEGIN{FS="."}{print $1}'`
  echo $prefix"," $num1 "," $num2
  echo $prefix"," $num1 "," $num2 >> modality_index.txt
  #loop through significant clusters
  count=1
  while [ $count -le $num1 ]; do
    fslmaths ${study}_cluster_index1.nii.gz -thr $count -uthr $count -bin /Volumes/Vol/processed_DTI/STRUCTURES/${prefix}_${count}_neg.nii.gz
    let count=count+1
  done
  count=1
  while [ $count -le $num2 ]; do
    fslmaths ${study}_cluster_index2.nii.gz -thr $count -uthr $count -bin /Volumes/Vol/processed_DTI/STRUCTURES/${prefix}_${count}_pos.nii.gz
    let count=count+1
  done
done

Extract cluster means:

#Extract TFCE cluster means
rm -f *_weighted.nii.gz
rm -f *_weighted_output.txt
rm -f *_merge.txt
cd /Volumes/Vol/processed_DTI
for i in DTI/STUDY_randomise_out DTI/STUDY_randomise_out2; do
  prefix=`echo ${i} | awk 'BEGIN{FS="/"}{print $1}'`
  suffix=`echo ${i} | awk 'BEGIN{FS="/"}{print $2}'`
  rm structures_all.txt
  cd /Volumes/Vol/processed_DTI/STRUCTURES
  for j in `ls ${prefix}_${suffix}*`; do
    pre=`echo ${j} | awk 'BEGIN{FS=".nii"}{print $1}'`
    echo $pre >> /Volumes/Vol/processed_DTI/structures_all.txt
  done
  cd /Volumes/Vol5/processed_DTI/NOMOM2/TEMPLATE/normalize_2017
  rm scalars_all.txt
  echo $suffix > scalars_all.txt
  sh ./weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/Vol/processed_DTI/${prefix}/${suffix} S01 S02 S03 S04
done

Finally, run the previous code to organize the output data.
What changes emerge when translating feminist literature from English into Polish?  ...
Monika Andrzejewska


March 09, 2018
Abstract The aim of this essay is to investigate to what extent gender matters in translation. The discussion centres on the Polish translations of the English feminist writings „A Room of One’s Own”, „Orlando” and „Written on the Body”. Unlike English, Polish is a highly inflected language, which forces gendered choices in the language used to describe characters. Thus, there is a risk that the translation may distort the original meaning of the text. I will begin by introducing some concepts from feminist translation theory, which draw attention to gender issues. Then, I will analyse the Polish translations of the books in question. The main argument of this essay is that because translating sexual ambiguity into Polish is impossible, feminist translators may use this to their advantage to transfer their own attitudes. This may ultimately shape the overall perception of the book and the author by a given readership.
THE TRUTH IS IN THE SOUL OF BEHOLDER - Silence
Igor Korosec


July 17, 2018
“In silence and movement you can show the reflection of people.”  Marcel Marceau
Why we should use balances and machine learning to diagnose ionomes
Essi Parent


January 20, 2020
The performance of a plant can be predicted from its ionome (the concentrations of elements in a living tissue) at a specific growth stage. Diagnoses have so far been based on simple statistical tools relating a Boolean index to a vector of nutrient concentrations or to unstructured sets of nutrient ratios. We are now aware that compositional data such as nutrient concentrations should be carefully preprocessed before statistical modeling. Projecting concentrations to isometric log-ratios confers a Euclidean space on compositional data, similar to geographic coordinates. By comparing projected nutrient profiles to a geographical map, this perspective paper shows why univariate ranges and ellipsoids are less accurate than machine learning models for assessing the nutrient status of a plant from its ionome. I propose an imbalance index defined as the Aitchison distance between an imbalanced specimen and the closest balanced point or region in a reference data set. I also propose, and raise some limitations of, a recommendation system in which the ionome of a specimen is translated to its closest point or region where high plant performance is reported. The approach is applied to a data set comprising macro- and oligo-elements measured in blueberry leaves from Québec, Canada.
From the bench to a grander vision
Adriana Bankston


January 24, 2018
As a kid, I was always very diligent in school and took it very seriously. As I was also curious and enjoyed a challenge, science was a good field for me to pursue. Plus, I grew up in a family of scientists, with both my parents and my grandparents in the field. But that didn’t necessarily mean I knew how academia worked. I moved to the U.S. after high school, graduated from college (with a B.S. from Clemson University), and attended graduate school at Emory University. While I had good grades and test scores, I still had a lot to learn about doing research, in spite of having worked in a lab for one year prior to graduate school. But I knew that I enjoyed bench work enough to pursue a graduate education, and I wanted to learn the scientific way of thinking. I had a really excellent graduate mentor (also female) who taught me everything I know about science. She taught me how to design experiments and interpret data, and pointed out when I was doing things wrong. She always pushed me to do better in multiple aspects of being a scientist, and taught me to speak up when I had a question or a thought, no matter how small it might have been. This ultimately allowed me to become more confident in my abilities as a scientist. She also managed work-life balance extremely well, which was really inspiring to see and proved very useful for me later. Overall, she was an amazing mentor and role model. Graduate school was pretty comfortable. I wasn't eligible to apply for many fellowships (at least not until I obtained my U.S. citizenship), but luckily the lab was well funded during my time there, which alleviated some pressure. I didn’t seek additional mentors because I felt that her guidance could point me in the right direction, which, at the time, was still an academic career. I also didn't really consider other career options during this time - if I had, I probably would have approached my scientific training differently.
During my postdoctoral training, I started exploring other careers, although academia was still on the table. Many changes took place in my life during this time, which allowed me to mature in several ways. I still carried with me the confidence I had gained during graduate school, which materialized into wanting to become a leader in my field of choice. But while examining potential careers, I also kept an open mind. I attended my first national meeting related to postdoctoral issues (but unrelated to my bench research), which piqued my interest in this area. Together with another postdoc at the university, I subsequently established a career seminar series as a resource for postdocs to hear from professionals in non-academic careers. While I didn’t realize this at the time, the seminar had the potential to change the local academic culture. Trainees came up to me and thanked me for creating this resource, which made me feel good in so many ways. At some point I noticed that some of them were regularly attending the events, and also seemed to be asking more questions and interacting more frequently with some of the speakers following their talks. This was a great experience. After that, I organized regional symposia to connect trainees to each other, and got involved with national organizations focused on training and policy for graduate students and postdocs. During this time, I began to network with experts in these areas, and to speak up about certain issues in academia. As I participated in more of these activities alongside my postdoctoral work, I eventually decided to follow the strong interests I was developing instead of trying to stay in academia. So, I quit my postdoc and continued to explore what I was really interested in doing, but now with a slightly clearer direction.
As luck would have it, I obtained a travel award to attend a science advocacy meeting in Boston (organized by Future of Research and other groups), which, interestingly, took place during my last month as a postdoc. That meeting got me hooked on studying academia and advocating for scientists, although my interests were fairly broad at that point. But these topics seemed to fit me like a glove, and I knew that I had to get more involved with the group. The rest is history. At Future of Research, I was fortunate enough to be involved early on with a project on tracking postdoc salaries nationally, which isn't something I ever imagined myself doing, but I loved it. This experience also opened me up to the idea of trying new things and going with the flow, instead of planning my next move in detail as I had always done. Over time, this project gave me a sense of purpose and direction while I was still figuring out my path. And no matter what else I did during this time, I always came back to the passion I had developed for trying to create evidence-based change in academia, while advocating for transparency in the system. I was a bit surprised to see how naturally these ideas came to me, as I never knew that you could study something like this; nevertheless, I found it extremely fascinating. I later reflected upon why it was so easy for me to engage in this area, and realized that it essentially blended multiple aspects of my personality: 1) an interest in doing research with a purpose; 2) the feeling that I am making a difference with my work; 3) speaking up for a particular cause and backing it up with data; and 4) having always been a bit of a rebel, which worked well for wanting to challenge the status quo. I finally felt that my life had a purpose and direction that I was happy to pursue.
Without going into details about my contributions (see more on my website), volunteering for a cause I believe in (and knowing what that is) has been a very powerful motivator for engaging in this type of work. In this context, taking ownership of science policy projects and leading them has been a very fulfilling experience. I am now on the Future of Research Board of Directors, which I feel is the ideal leadership position for me. In some ways, this is the opportunity I had been waiting for all this time; I just didn't know it, and obviously couldn’t have predicted it. I’m very grateful to this group for making me feel that my opinion was valued and my voice counted during a time when I wasn’t quite sure where I was going. I now know the direction I want my life to take, which is quite amazing in itself. I also know that just having a job isn’t sufficient for me without contributing to a grander vision and the potential to make the world a better place. And while I am still looking for a position in this area, I am now aware that I am much more motivated by a mission than by money. I wouldn't have realized that if it weren't for my experience with Future of Research. Some of the lessons I’ve learned along the way are: 1) Don’t let anyone tell you how to live your life; 2) Volunteering can pay off if you are truly invested in it; and 3) Gratitude is a good way to live your life in general. As I try to keep these lessons in mind moving forward, perhaps the biggest one is that taking some time to discover what is truly important to me will be a worthwhile long-term investment in my future.
Medical Students Fail Blood Pressure Measurement Challenge: Implications for Measurem...
Kenneth Royal, PhD


January 19, 2018
Rakotz and colleagues (2017) recently published a paper describing a blood pressure (BP) challenge presented to 159 medical students representing 37 states at the American Medical Association’s House of Delegates Meeting in June 2015. The challenge consisted of correctly performing all 11 elements involved in a BP assessment using simulated patients. Alarmingly, only 1 of the 159 (0.63%) medical students correctly performed all 11 elements. According to professional guidelines (Bickley & Szilagyi, 2013; Pickering et al, 2005), the 11 steps involved in a proper BP assessment are: 1) allowing the patient to rest for 5 minutes before taking the measurement; 2) ensuring the patient’s legs are uncrossed; 3) ensuring the patient’s feet are flat on the floor; 4) ensuring the patient’s arm is supported; 5) ensuring the sphygmomanometer’s cuff size is correct; 6) properly positioning the cuff over a bare arm; 7) no talking; 8) ensuring the patient does not use his/her cell phone during the reading; 9) taking BP measurements in both arms; 10) identifying the arm with the higher reading as clinically more important; and 11) identifying the correct arm to use when performing future BP assessments (the one with the higher measurement). All medical students involved in the study confirmed that they had previously received training in measuring blood pressure during medical school. Further, because additional skills are necessary when using a manual sphygmomanometer, the authors of the study elected to provide all students with an automated device, removing correct use of the auscultatory method from the testing process. The authors reported that the average number of elements correctly performed was 4.1 (no SD was reported). While the results from this study will likely raise concern among the general public, scholars and practitioners of measurement may find these results particularly troubling.
There currently exists an enormous literature on blood pressure measurement. In fact, there are academic journals devoted entirely to the study of blood pressure measurements (e.g., Blood Pressure Monitoring), and numerous medical journals devoted to the study of blood pressure (e.g., Blood Pressure, Hypertension, Integrated Blood Pressure Control, Kidney & Blood Pressure Research, High Blood Pressure & Cardiovascular Prevention, etc.). Further, a considerable body of literature discusses the many BP instruments and methods available for collecting readings, and the various statistical algorithms used to improve the precision of BP measurements. Yet, despite all the technological advances and sophisticated instruments available, these tools are likely of only limited utility until health care professionals use them correctly. Inappropriate inferences from BP readings could result in unintended consequences that jeopardize a patient’s health. In fact, research (Chobanian et al, 2003) indicates most human errors in measuring BP result in higher readings. These costly errors may therefore result in misclassifying prehypertension as stage 1 hypertension and beginning a treatment program that may be both unnecessary and harmful to a patient. This problem is further exacerbated when physicians put a patient on high blood pressure medication, as most physicians are extremely reluctant to take a patient off the medication, given the risks associated with stopping. Further, continued use of poor BP measurement techniques could cause patients whose blood pressure is under control to appear uncontrolled, escalating therapy in ways that could further harm a patient. Until physicians can obtain accurate BP measurements, it is unlikely they can accurately differentiate the individuals who need treatment from those who do not.
So, I wish to ask the measurement community: how might we assist healthcare professionals (and those responsible for their training) in correctly practicing proper blood pressure measurement techniques? What lessons from psychometrics can be parlayed into the everyday practice of healthcare providers? Contributing practical solutions to this problem could go a long way toward directly improving patient health and outcomes. References Pickering T, Hall JE, Appel LJ, et al. Recommendations for blood pressure measurement in humans and experimental animals part 1: blood pressure measurement in humans - a statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research. Hypertension. 2005;45:142-161. Bickley LS, Szilagyi PG. Beginning the physical examination: general survey, vital signs and pain. In: Bickley LS, Szilagyi PG, eds. Bates’ Guide to Physical Examination and History Taking, 11th ed. Philadelphia, PA: Wolters Kluwer Health/Lippincott Williams and Wilkins; 2013:119-134. Chobanian AV, Bakris GL, Black HR, et al. Seventh report of the Joint National Committee on prevention, detection, evaluation and treatment of high blood pressure. Hypertension. 2003;42:1206-1252. Rakotz MK, Townsend RR, Yang J, et al. Medical students and measuring blood pressure: results from the American Medical Association Blood Pressure Check Challenge. Journal of Clinical Hypertension. 2017;19:614-619.
Trust Asymmetry
Dr. Percy Venegas


February 28, 2018
In the traditional financial sector, players profited from information asymmetries. In the blockchain financial system, they profit from trust asymmetries. Transactions are a flow; trust is a stock. Even if the information asymmetries across the medium of exchange are close to zero (as is expected in a decentralized financial system), there exists a “trust imbalance” in the perimeter. This fluid dynamic follows Hayek's concept of monetary policy: “What we find is rather a continuum in which objects of various degrees of liquidity, or with values which can fluctuate independently of each other, shade into each other in the degree to which they function as money”. Trust-enabling structures are derived using Evolutionary Computing and Topological Data Analysis; trust dynamics are rendered using Fields Finance and the modeling of mass and information flows from Forrester's System Dynamics methodology. Since the levels of trust are computed from the rates of information flows (attention and transactions), trust asymmetries might be viewed as a particular case of information asymmetries, albeit one in which hidden information can be accessed, of the sort that neither price nor on-chain data can provide. The key finding is the existence of a “belief consensus”, with trust metrics as a possible fundamental source of intrinsic value in digital assets. This research is relevant to policymakers, investors, and businesses operating in the real economy who are looking to understand the structure and dynamics of digital asset-based financial systems. Its contributions are also applicable to any socio-technical system of value-based attention flows.
The Integrity of the Free-for-All Committee (Integritas Panitia Tarung Bebas)
Saortua Marbun


January 18, 2018
The Integrity of the Free-for-All Committee Saortua Marbun\citep{marbun2018} This year and the next mark a moment of "free-for-all fighting" known as the regional head elections and then the election of the head of state, enlivened by the election of the people's representatives to the DPD, DPR, and DPRD. This electoral contest is marked by a war of opinions loaded with "interests". The themes under discussion are "slowing economic growth" and "national debt". The contending parties scramble for the five-yearly "trophy" of power. Meanwhile, the "grantors of power" struggle because they are neglected, marginalized, and unrepresented. To borrow a phrase from Ashraf \citet{ghani2005}, the people are continually the victims of capitalism and democracy. The people benefit from neither system. They have no funds to serve as capital; money is used only as a medium of exchange, to buy life's necessities, just enough staple goods. A minimum wage sufficient for a decent standard of living is still being debated. Democratically, the people are not the true "owners of power". The people's involvement in a democratic state is limited to voting. The maximum profit from "power" is instead enjoyed by the winners of the fight. Even though the word "people" is spoken and written thousands of times, "the poor" and "poverty" feel like "lipstick" applied by those who are "greedy for power". It is only logical, then, that the "Organizers of the Free-for-All" are required to work hard. Their presence is badly needed because the two words "Capitalism and Politics" are now in the arena. Democracy tends to be won by the owners of capital who stand as contestants, or by the owners of capital behind the candidates. They are adept at playing roles, using all manner of tricks and moves: stirring up identity politics, sowing hoaxes, running black campaigns, money politics, and so on. That reality is the reason for the "Committee" to combine forces, for example by deploying task forces against money politics, against hoaxes, against sectarian politics, and all the other "antis".
We hope that the keywords poor, poverty, poor people, poor farmers, poor fishermen, the weak, the disadvantaged, and the powerless will not be used as make-up to boost an actor's electability. The nominal value of the aid, donations, charity, and assistance given is far smaller than the "advertorial costs", and it means even less when weighed against the "intangible benefits" enjoyed by the "political donor" who appears as a benefactor worthy of a "vote" or a "like". Electorally labeled donations of "staple goods, free medical treatment, free health check-ups" are like "a few grains of rice" fallen from a feast, "no match for the dishes" served at the table. The Committee is expected to be a "referee", a "match judge" who is honest, fair, authoritative, independent, true to the rules, and on time. If the referee is "weak", we all know the consequences. If the ringside judge is unfair, the fight turns wild. If the rules of the game are not enforced, the prize of victory can fall into the hands of a cheating loser. The Word of God says, "You shall not pervert justice; you shall not show partiality or accept a bribe, for a bribe blinds the eyes of the wise and twists the words of the righteous." \cite{sabdab}(Deuteronomy 16:19, MILT) This country badly needs a task force made up of those of whom it can be said: "He who walks righteously and speaks uprightly, who rejects gain from extortion, who shakes his hands free of taking bribes, who stops his ears from hearing of bloodshed and shuts his eyes from seeing evil." \cite{sabda}(Isaiah 33:15, MILT) Eradicating money politics is not as easy as saying, "take the money but do not vote for the person." All parties need to be reminded that "the love of money is the root of all kinds of evil.
Some have strayed from the faith and pierced themselves with many griefs in their pursuit of money." \cite{sabdaa}(1 Timothy 6:10, Shellabear) In the context of democracy, accepting or giving money is contrary to God's teaching.(*)
Sowing the Seeds of Civilized Politics (Menabur Benih Politik Berkeadaban)
Saortua Marbun


January 18, 2018
Sowing the Seeds of Civilized Politics\citep{Marbun2018} Saortua Marbun "It seems that Bawaslu and the anti-money-politics, anti-sectarian, anti-hoax task forces mean business this time. Do not let yourself be caught red-handed, friend. If you do not fear being seen by the LORD, at least fear the CCTV cameras, hidden cameras, 'spies', and the cameras of the mass media." So reads one tweet posted on social media.\citep{marbun2018a} According to Amartya Sen (2009), cited in \citet{rido2017}, the essence of democracy is the encouragement of a constructive role in the formation of values and of the importance of the essence of human life (welfare). In reality, however, this is not yet the case. \citet{wattimena2018} writes that "politics has been uprooted" from virtue, from spirituality, from science, and from culture. The State must therefore show its "fangs" so that the political festivities of this year and beyond return to the true identity of noble politics, the essence of democracy. Politics has been uprooted from virtue and its underlying philosophy, turning into a transaction of power that sacrifices the interests of the broader public. Politics is, at heart, a noble profession for building consensus for the common good through intelligent policy and exemplary conduct. The nobility of this profession seems to have "vanished", swallowed by the "lust" for power. Politics has metamorphosed into an enemy of goodness. Today's political attitudes and behavior have detached themselves from the narrative of civility. The internalization of the values and culture of civilized democracy at the mass level has been marginalized. Politics has been uprooted from science. Political policies of all kinds are produced without a basis in rationality, without the support of sound scientific research. Time and again the public has been left powerless by political decisions made in collusion with corrupt owners of capital, ultimately harming the public interest.
Many existing policies simply defy common sense, and at times they make existing problems worse. Politics has been uprooted from spirituality, the way of life that puts universal humanity at the forefront of every decision and act. Today's politics has squeezed spirituality down to mere religion, used as a vehicle and a mask to cover the stench of rot. Consciously or not, political campaigns that ride on religion are one indication that the actors and the puppet-masters behind the scenes are corrupt. Politics has been uprooted from culture; it has become a victim of values imported from the West and the Middle East. As a result, the noble values of local culture have been eroded and lost. Politics uprooted from culture creates alienation and breeds ever-worsening poverty and ignorance in society. Today's democracy has created "enmity" between supporters; the parties see one another as political enemies, facing off in elections for village heads, regents, mayors, governors, legislators, and the presidency. The electoral tensions of 2013 still feel fresh; now, in 2018, the public hopes democracy will proceed in a calmer atmosphere. It seems fitting to reread \citet{thohari2014}: "In the past, the struggle for power and the throne was waged through wars full of violence and bloodshed. Democratic politics provides a mechanism for a 'struggle for the throne' that is fair, healthy, and civilized, through general elections. It is deeply ironic, then, if general elections that ought to be civilized are once again treated as a hard, rough, and brutal premodern war for the throne." Therefore, seriousness on the part of the organizers, together with all stakeholders, is certainly needed in the effort to sow the seeds of healthy politics while cutting off the roots of problematic politics.
The future, unity, and welfare of the nation are at stake; if this effort fails, politics will become an engine of destruction bringing the calamities of poverty, suffering, and moral collapse. The Word of God says, "Where there is no guidance from the LORD, the people run wild; blessed is he who keeps the law of the LORD." (Proverbs 29:18, BIS) \cite{sabda}.
Spatial  Vulnerability of Surface Water to Chemical Contamination in the Ngwerere Riv...
Mabvuso Christopher Sinda

and 3 more

March 27, 2024
Concerns about water pollution from commercial agriculture, demographic change, urbanisation, industry, and other anthropogenic activities in the Ngwerere River peri-urban watershed (NPW) motivated this study, which focused on the spatial vulnerability of surface water to chemical pollution. The aim was to implement a rapid integrated ecosystem assessment to analyse the predisposition of surface water to chemical pollution. The specific objectives were to evaluate water quality (WQ) parameters as indicators of chemical pollution in the Ngwerere River, determine the spatial vulnerability of the water to chemical pollution, and establish linkages between chemical pollution and its sources within the NPW. An ecosystems approach was followed to assess pH, salinity, total dissolved solids (TDS), chemical oxygen demand (COD), Na+, and total suspended solids (TSS), and thereby determine WQ with respect to the functional integrity of selected ecosystem services. The results show that pH ranged from 7 to 8 and COD from 4 to 36 mg L-1 O2. The other parameters ranged over 60–163 mg L-1 (salinity), 258–567 mg L-1 (TDS), 22–60 mg L-1 (Na+), and 2–710 mg L-1 (TSS). The means were 7.62 ± 0.05 (pH, unitless), 118.50 ± 4.07 mg L-1 (salinity), 437.91 ± 14.35 mg L-1 (TDS), 17.45 ± 2.04 mg L-1 O2 (COD), 45.46 ± 1.53 mg L-1 (Na+), and 192.79 ± 46.90 mg L-1 (TSS). With the exception of COD and Na+, which exceeded the relevant environmental standards, all measured parameters were within acceptable ranges, indicating the functional integrity of the ecosystem. Furthermore, the concentration trends of all WQ parameters were indicative of chemical pollution. Three watershed positions, the upstream catchment (USC), midstream catchment (MSC), and downstream catchment (DSC), were assessed for vulnerability to chemical pollution. The USC and MSC were more vulnerable on all parameters except TSS, which had higher concentrations at the DSC.
Linkages were established between chemical pollution in the river and its sources. With the exception of TSS, parameter concentrations were high in the USC and MSC owing to their proximity to point-source and non-point-source pollution. In contrast, high TSS concentrations were observed in the DSC due to the accumulation of loading. Further work should establish the ecosystem services present in the NPW, so that assessments can be refined and focused on specific environmental concerns for the NPW.  Keywords: chemical pollution, ecosystems, freshwater, water quality, Ngwerere River peri-urban watershed
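The assessment described above amounts to comparing each parameter's mean concentration against an environmental standard. A minimal sketch of that comparison step, using hypothetical measurements and hypothetical limit values (not the study's raw data or the Zambian standards), could look like this:

```python
# Hedged sketch: flag water-quality parameters whose mean exceeds a limit.
# All numbers below are ILLUSTRATIVE placeholders, not data from the study.
from statistics import mean

samples = {                      # hypothetical measurements (mg/L)
    "COD": [4, 12, 25, 36],
    "Na+": [22, 40, 55, 60],
    "TDS": [258, 400, 500, 567],
}
limits = {"COD": 10, "Na+": 30, "TDS": 1000}  # assumed limit values

def exceedances(samples, limits):
    """Return the parameters whose mean concentration exceeds its limit."""
    return sorted(p for p, vals in samples.items() if mean(vals) > limits[p])

print(exceedances(samples, limits))  # → ['COD', 'Na+']
```

With these invented numbers the sketch flags COD and Na+, mirroring the shape of the abstract's finding, though the real assessment of course rests on the study's own measurements and standards.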
Backstory of  "Molcas 8: New capabilities for multiconfigurational quantum chemical c...
Josh Nicholson

Roland Lindh

and 1 more

October 21, 2020
The subject paper\cite{Aquilante_2015} is the fifth in a series of papers\cite{Aquilante_2012,Roos_1990,Roos_1991,Veryazov_2004,Aquilante_2010} on the development of the MOLCAS program package. In this short backstory I will try to put the MOLCAS quantum chemistry program package into a brief historical context, shortly describe its development, and finally argue why papers like the subject paper are needed. The Molcas project was started in 1989 by the theoretical chemistry group of the late Prof. Björn Roos (see Figure 1) at Lund University, Sweden. The Swedish government had struck a deal with the banks -- no tax increase if they supported research -- and as a consequence the Molcas project materialised as a collaboration between IBM and the research group in Lund. Swedish theoretical chemistry had made a serious impact on the ab initio field at the time, with contributions from researchers such as Jan Almlöf, Per E. M. Siegbahn, and Björn Roos -- the former two being the first Ph.D. students of the latter at Stockholm University. During this time the three of them had developed specialized software: Jan Almlöf developed the Molecule\cite{Taylor_2017} program (computation of two-electron integrals), Per E. M. Siegbahn the MRCI code\cite{Siegbahn_1992,Roos_1977} (multi-reference configuration interaction), and Björn O. Roos the CASSCF program\cite{Roos_1980} (complete active space self-consistent field). The goal of the Molcas project was to bring these pieces of software together in a single package designed for the IBM 3090 machine. Version 1.0 was distributed to the public in late 1989. Subsequent versions were released in 1991, 1993, 1997, 2000, 2003, 2007, and 2014, covering versions 2-8, all of them commercial. Today the package supports multiple options and methods, as well as several hardware and software platforms.
In 2005 the project started the "Molcas users' workshops", with the most recent workshop, the 8th, taking place in Uppsala in November 2017. Over time, under the leadership of Björn Roos, the project had several successes which have been seminal to the field; let us mention two here, the complete active space 2nd-order perturbation theory model\cite{Andersson_1993} and the complete active space state interaction\cite{Malmqvist_1989} method. From the formation of the project until about 2010, the project was heavily dominated by the Lund group, especially with respect to leadership and strategic decisions, though with significant programming contributions from international collaborators. In 2009 Björn Roos retired from the project due to poor health\cite{Siegbahn_2011}, and the baton was passed on to the long-time Molcas co-developer Roland Lindh. The first "Molcas developers' workshop" took place in Zürich in 2013. It has been followed by annual workshops in Alcalá (Spain), Siena (Italy), Vienna (Austria), Jerusalem (Israel), and this year in Leuven (Belgium). Over the same period the project developed from a national Swedish project -- dominated by a single Swedish research group -- into an international project with 30-40 active developers from some 10 different universities and institutes. The author list of the subject paper is a testament to this development. In 2017 the project went open source, with the most significant part released under the Lesser GPL license; it is now distributed free of charge under the name OpenMolcas. The subject paper was written at the request of the developers after one of our developers' workshops. People argued that a single paper, including the most recent developments, was needed to make new developments and implementations known to the computational chemistry community.
Additionally, the lack of recognition and credit for software development was mentioned as one of the most important reasons for a paper like the subject paper -- in many respects a mini-review with no novel contributions. Hard-working software developers seldom get proper credit for their work, although it is fundamental to the ability to perform accurate quantum chemical simulations, in particular when the development is not associated with new wave-function models. Some of us, like me, contribute significant software and methods which are completely instrumental for the calculations, but hardly ever get any credit for this contribution. Let me use my own contribution as an example: the two-electron integral code\cite{Lindh_1991} (long since also a part of MOLPRO), without which no calculations with the package would be possible. Since its publication in 1991 this paper, on the computation of two-electron integrals, has attracted 258 citations according to Google Scholar. In the same time the two packages have attracted 7249 citations -- the use of the two-electron code was surely significant to the research those citations correspond to, yet credit was handed to the developer less than 3.6% of the time. Had I designed the basis sets instead, I would have been assured the full 7249 citations -- we always cite the basis sets, but hardly ever how we efficiently compute the matrix elements they generate. There are several other developments and features in a quantum chemistry package which are not considered worthy of citation but are just as essential to calculations. This is where a paper such as the subject paper comes in as an equalizer, making sure that all developers of a package get the credit and respect they deserve.
With this type of paper around we kill two birds with one stone -- we reduce the number of references to theoretical papers and at the same time make sure that all developers get the recognition they deserve and need.
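The citation-share figure quoted above can be checked with a few lines of arithmetic, using the Google Scholar counts given in the text:

```python
# Citation-share arithmetic from the backstory (figures as quoted in the text).
integral_paper_citations = 258   # citations of the two-electron integral paper
package_citations = 7249         # citations of the two program packages combined

share = integral_paper_citations / package_citations
print(f"{share:.1%}")  # → 3.6%
```

This confirms that credit reached the integral-code developer in roughly 3.6% of the package citations.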
Animals that eat animals could help people that grow food
Sam Williams


January 13, 2018
People that grow food don't get along well with animals that eat other animals. We used things that take pictures by themselves to find out if there are more animals that eat other animals in places where people grow food, in places where people live, or in places where there are lots of trees. We found out that the most animals that eat other animals live where people grow food. We also found that animals that eat other animals eat lots of small animals. Animals that eat other animals could help people that grow food, by eating small animals that would eat the food they grow. This is part of the #upgoerfive challenge, explaining the findings of \citet{Williams_2017}.