Most Reputable Users

  • Patrick Joyce: 2912
  • Ida Rolf: 232
  • Naoyuki Sunami: 139
  • Iuliia Iuliia: 200
  • Frach jGna: 200
  • Alexander: 200
  • Dmitry Chernyshov: 201
  • Will McBurnie: 314
  • Alexander K: 202
  • Елена Белоснежная: 200

Trending Papers in methodology

A new microscopy sample medium that mimics water allows for the real-time imaging of living systems
From Paper: A polymer gel index-matched to water enables diverse applications in fluorescence microscopy
Published: Oct 2020
  • The compound, BIO-133, is a polymer gel with a refractive index matched to that of water. This property gives the microscope better resolution than other media that allow for microfluidic imaging
  • BIO-133 enabled the immobilization of 1) Drosophila tissue, allowing the authors to track membrane puncta in pioneer neurons, and 2) C. elegans, which allowed the authors to image and inspect fine neural structure and to track pan-neuronal calcium activity over hundreds of volumes
Submitted by Patrick Joyce
The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time
From Paper: The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time
Published: Nov 2013
Researcher degrees of freedom can lead to a multiple comparisons problem, even in settings where researchers perform only a single analysis on their data. The problem is that there can be a large number of potential comparisons when the details of the data analysis are highly contingent on the data, without the researcher having to perform any conscious procedure of fishing or examining multiple p-values. We discuss this in the context of several examples of published papers where the data-analysis decisions were theoretically motivated based on previous literature, but where the details of data selection and analysis were not pre-specified and, as a result, were contingent on the data.
Submitted by Naoyuki Sunami
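The forking-paths problem this entry describes is easy to see in simulation. The sketch below is illustrative only: the two outcomes, sample sizes, and the specific data-contingent choice are invented rather than taken from the paper. Each simulated researcher runs a single t-test, but on whichever outcome shows the larger observed group difference, and the false-positive rate across replications inflates above the nominal 5%.

```python
# Minimal sketch of the "garden of forking paths": one test per dataset,
# but the choice of which outcome to test is made after seeing the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 5000, 30
false_positives = 0

for _ in range(n_sims):
    # Two outcome variables, neither with a true group difference (the null holds).
    treatment = rng.normal(size=(n_per_group, 2))
    control = rng.normal(size=(n_per_group, 2))

    # Data-contingent choice: analyze whichever outcome shows the larger
    # observed group difference, then run a single t-test on that outcome.
    observed_gaps = np.abs(treatment.mean(axis=0) - control.mean(axis=0))
    chosen = int(np.argmax(observed_gaps))
    _, p_value = stats.ttest_ind(treatment[:, chosen], control[:, chosen])
    false_positives += p_value < 0.05

print(f"false-positive rate: {false_positives / n_sims:.3f} (nominal level: 0.05)")
```

With two candidate outcomes the rate lands near 1 - 0.95^2 ≈ 0.10 rather than 0.05; with more data-contingent choices the inflation grows, which is the abstract's point that no deliberate fishing is required.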
Parsimonious Mixed Models
Authors:
Douglas Bates, Reinhold Kliegl, Shravan Vasishth, Harald Baayen
Published: Jun 2015
The analysis of experimental data with mixed-effects models requires decisions about the specification of the appropriate random-effects structure. Recently, Barr, Levy, Scheepers, and Tily (2013) recommended fitting 'maximal' models with all possible random effect components included. Estimation of maximal models, however, may not converge. We show that failure to converge typically is not due to a suboptimal estimation algorithm, but is a consequence of attempting to fit a model that is too complex to be properly supported by the data, irrespective of whether estimation is based on maximum likelihood or on Bayesian hierarchical modeling with uninformative or weakly informative priors. Importantly, even under convergence, overparameterization may lead to uninterpretable models. We provide diagnostic tools for detecting overparameterization and guiding model simplification.
Retrieved from arXiv
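The paper works in R with lme4; the sketch below is only a rough Python analog (simulated data, statsmodels) of the kind of check the abstract motivates: fit the maximal random-effects specification and look for random-effect variances estimated at or near zero before simplifying. Variable names and the data-generating process here are placeholders.

```python
# Rough analog of checking for an overparameterized random-effects structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_trials = 20, 10
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_trials),
    "x": np.tile(np.linspace(-1.0, 1.0, n_trials), n_subjects),
})
# Simulated truth: random intercepts only, i.e. zero random-slope variance.
subject_intercepts = rng.normal(0.0, 1.0, n_subjects)
data["y"] = (2.0 + subject_intercepts[data["subject"].to_numpy()]
             + 0.5 * data["x"] + rng.normal(0.0, 1.0, len(data)))

# "Maximal" specification: by-subject random intercepts and random slopes for x.
maximal = smf.mixedlm("y ~ x", data, groups=data["subject"], re_formula="~x").fit()
print(maximal.cov_re)   # the x-slope variance should come out at or near zero

# The parsimonious move when the data cannot support the slope: drop it.
reduced = smf.mixedlm("y ~ x", data, groups=data["subject"]).fit()
print(reduced.cov_re)
```

The paper itself provides more principled diagnostics for deciding which random-effect components the data can support; this is just the informal version of that question.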
Why hypothesis testers should spend less time testing hypotheses
  • The authors suggest that researchers consider which element of their derivation chain is the ‘weakest link’, such that strengthening it would have the largest effect on the extent to which an eventual hypothesis test can inform a theory.
  • The authors argue that non-confirmatory research activities can be used to lay the foundation for informative hypothesis tests
Submitted by Naoyuki Sunami
A new blood test based on DNA methylation patterns that can diagnose cancer four years before conventional methods
From Paper: Non-invasive early detection of cancer four years before conventional diagnosis using a blood test
Published: Jul 2020
  • This paper demonstrates that PanSeer detects cancer in 95% (95% CI: 89–98%) of asymptomatic individuals who were later diagnosed, though future longitudinal studies are required to confirm this result
  • The blood test, called PanSeer, detects five common types of cancer in 88% (95% CI: 80–93%) of post-diagnosis patients with a specificity of 96% (95% CI: 93–98%)
Submitted by Patrick Joyce
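For readers unfamiliar with how intervals like the quoted "95% (95% CI: 89-98%)" sensitivity are produced, the snippet below computes a standard binomial confidence interval for a detection proportion. The counts are hypothetical placeholders, not the paper's data.

```python
# Binomial (Wilson) confidence interval for a sensitivity estimate.
from statsmodels.stats.proportion import proportion_confint

detected, total = 181, 191  # hypothetical counts, NOT taken from the paper
sensitivity = detected / total
low, high = proportion_confint(detected, total, alpha=0.05, method="wilson")
print(f"sensitivity = {sensitivity:.1%}, 95% CI ({low:.0%}-{high:.0%})")
```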
Rationales, design and recruitment of the Taizhou Longitudinal Study
Published: Jul 2009
Background: Rapid economic growth in China in the past decades has been accompanied by dramatic changes in lifestyle and environmental exposures. The burdens of non-communicable diseases, such as cardiovascular diseases, diabetes and cancer, have also increased substantially. Methods/design: We initiated a large prospective cohort, the Taizhou Longitudinal Study, in Taizhou (a medium-size city in China) to explore the environmental and genetic risk factors for common non-communicable diseases. The sample size of the cohort will be at least 100,000 adults aged 30–80 years drawn from the general residents of the districts of Hailin, Gaogang, and Taixing (sample frame, 1.8 million) of Taizhou. A three-stage stratified sampling method will be applied. Baseline investigations include an interviewer-administered questionnaire, anthropometric measurements, and collection of buccal mucosal cells and blood specimens. DNA will be extracted for genetic studies and serum samples will be used for biochemical examinations. A follow-up survey will be conducted every three years to obtain information on disease occurrence and on selected lifestyle exposures. Study participants will be followed up indefinitely by using a chronic disease register system for morbidity and cause-specific mortality. Information on non-fatal events will be obtained for certain major categories of disease (e.g., cancer, stroke, myocardial infarction) through established registry systems. Discussion: The Taizhou Longitudinal Study will provide a good basis for exploring the roles of many important environmental factors (especially those concomitant with the economic transformation in China) for common chronic diseases, solely or via interaction with genetic factors.
Submitted by Patrick Joyce
A new technique allows researchers to visualize the location of mitochondrial proteins within mitochondria
From Paper: Assigning mitochondrial localization of dual localized proteins using a yeast Bi-Genomic Mitochondrial-Split-GFP
  • This new approach certainly has many potential applications and opens new avenues in the study of mitochondria and their communication with other compartments of the cell
  • Many mitochondrial proteins are transcribed in the nucleus and localize to both mitochondria and the cytosol. These proteins are difficult to visualize because the fluorescent signal from the cytosolic fraction can drown out the signal from the mitochondrial fraction
Submitted by Ida Rolf
Single Nugget Kriging
Authors:
Minyong R. Lee, Art B. Owen
Published: Jul 2015
We propose a method with better predictions at extreme values than the standard method of Kriging. We construct our predictor in two ways: by penalizing the mean squared error through conditional bias and by penalizing the conditional likelihood at the target function value. Our prediction exhibits robustness to the model mismatch in the covariance parameters, a desirable feature for computer simulations with a restricted number of data points. Applications on several functions show that our predictor is robust to the non-Gaussianity of the function.
Retrieved from arXiv
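The abstract above compares against standard Kriging. As context only, here is a sketch of that baseline (Gaussian-process regression with an RBF covariance and a small nugget term) on an arbitrary test function; it is not an implementation of the paper's Single Nugget Kriging predictor.

```python
# Standard Kriging / GP regression baseline on a deterministic test function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(20, 1))          # a small design, as in computer experiments
y = np.sin(6.0 * X[:, 0]) + 0.3 * X[:, 0] ** 2   # arbitrary deterministic test function

# RBF covariance plus a small WhiteKernel "nugget" term.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e-1))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"x = {x:.2f}: prediction {m:+.3f} +/- {1.96 * s:.3f}")
```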
Asymptotic Results on Adaptive False Discovery Rate Controlling Procedures Based on Kernel Estimators
Author:
Pierre Neuvial
Published: Mar 2010
The False Discovery Rate (FDR) is a commonly used type I error rate in multiple testing problems. It is defined as the expected False Discovery Proportion (FDP), that is, the expected fraction of false positives among rejected hypotheses. When the hypotheses are independent, the Benjamini-Hochberg procedure achieves FDR control at any pre-specified level. By construction, FDR control offers no guarantee in terms of power, or type II error. A number of alternative procedures have been developed, including plug-in procedures that aim at gaining power by incorporating an estimate of the proportion of true null hypotheses. In this paper, we study the asymptotic behavior of a class of plug-in procedures based on kernel estimators of the density of the $p$-values, as the number $m$ of tested hypotheses grows to infinity. In a setting where the hypotheses tested are independent, we prove that these procedures are asymptotically more powerful in two respects: (i) a tighter asymptotic FDR control for any target FDR level and (ii) a broader range of target levels yielding positive asymptotic power. We also show that this increased asymptotic power comes at the price of slower, non-parametric convergence rates for the FDP. These rates are of the form $m^{-k/(2k+1)}$, where $k$ is determined by the regularity of the density of the $p$-value distribution, or, equivalently, of the test statistics distribution. These results are applied to one- and two-sided test statistics for Gaussian and Laplace location models, and for the Student model.
Retrieved from arXiv
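The plug-in idea in this abstract is easy to illustrate. The paper studies plug-in procedures whose estimate of the proportion of true nulls comes from a kernel estimator of the $p$-value density; the sketch below swaps in the simpler, more familiar Storey-type estimate purely to show how a plug-in Benjamini-Hochberg procedure differs from the plain one. The simulated p-value mixture is invented.

```python
# Plain BH vs. a plug-in (adaptive) BH procedure using a Storey-type pi0 estimate.
import numpy as np

def bh_rejections(pvals: np.ndarray, alpha: float) -> np.ndarray:
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the largest
    index with p_(k) <= k * alpha / m. Returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def storey_pi0(pvals: np.ndarray, lam: float = 0.5) -> float:
    """Storey-type estimate of the proportion of true null hypotheses."""
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

rng = np.random.default_rng(3)
# Simulated p-values: 800 true nulls (uniform) + 200 alternatives (skewed toward 0).
pvals = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])

alpha = 0.05
plain = bh_rejections(pvals, alpha)
plugin = bh_rejections(pvals, alpha / storey_pi0(pvals))  # plug-in BH at level alpha / pi0_hat
print(f"pi0 estimate: {storey_pi0(pvals):.2f}")
print(f"rejections: BH = {plain.sum()}, plug-in BH = {plugin.sum()}")
```

Because the estimated proportion of true nulls is below one, the plug-in procedure runs BH at an effectively larger level and typically rejects more hypotheses, which is the power gain the abstract analyzes asymptotically.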
A gamma approximation to the Bayesian posterior distribution of a discrete parameter of the Generalized Poisson model
Author:
T. F. Khang
Published: Jun 2016
Let $X$ have a Generalized Poisson distribution with mean $kb$, where $b$ is a known constant in the unit interval and $k$ is a discrete, non-negative parameter. We show that if an uninformative uniform prior for $k$ is assumed, then the posterior distribution of $k$ can be approximated using the gamma distribution when $b$ is small.
Retrieved from arXiv
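A hedged way to see why a gamma shape is natural here is the ordinary Poisson special case of the model (the Generalized Poisson with zero dispersion). If $X \mid k \sim \mathrm{Poisson}(kb)$ and $k$ has a uniform prior over the non-negative integers, then

$$\pi(k \mid x) \;\propto\; \frac{(kb)^{x} e^{-kb}}{x!} \;\propto\; k^{x} e^{-bk},$$

which, read as a function of a continuous argument, is the kernel of a $\mathrm{Gamma}(x+1,\, b)$ density (shape $x+1$, rate $b$). When $b$ is small the posterior spreads over many integer values of $k$, so the discreteness matters little and the gamma curve tracks the posterior closely; the paper's contribution is showing that an approximation of this kind remains accurate for the full Generalized Poisson model.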