The construction of beaver dams facilitates a suite of hydrologic, hydraulic, geomorphic, and ecological feedbacks that increase stream complexity and channel–floodplain connectivity, benefiting aquatic and terrestrial biota. Depending on where beaver build dams within a drainage network, they affect lateral and longitudinal connectivity by introducing roughness elements that fundamentally change the timing, delivery, and storage of water, sediment, nutrients, and organic matter. While the local effects of beaver dams on streams are well understood, network-scale models that predict where beaver dams can be built and highlight their impacts on connectivity across diverse drainage networks are lacking. Here we present a capacity model to assess the limits of riverscapes to support dam-building activity by beaver across physiographically diverse landscapes. We estimated dam capacity with freely and nationally available inputs to evaluate seven lines of evidence: (1) a reliable water source; (2) riparian vegetation conducive to foraging and dam building; (3) vegetation within 100 m of the stream edge to support expansion of dam complexes and maintain large colonies; (4) the likelihood that channel-spanning dams could be built during low flows; (5) the likelihood that a beaver dam would withstand typical floods; (6) a stream gradient that is neither so low as to limit dam density nor so high as to preclude the building or persistence of dams; and (7) a river that is not so large as to restrict dam building or persistence. Fuzzy inference systems were used to combine these controlling factors in a framework that also explicitly accounts for model uncertainty. The model was run for 40,561 km of streams in Utah, USA, and portions of surrounding states, predicting an overall network capacity of 356,294 dams at an average capacity of 8.8 dams/km. We validated model performance using 2852 observed dams across 1947 km of streams. The model showed excellent agreement with observed dam densities where beaver dams were present. Model performance was spatially coherent and logical, with electivity indices that effectively segregated capacity categories. That is, beaver dams were not found where the model predicted no dams could be supported, beaver avoided segments predicted to support only rare or occasional densities, and beaver preferentially occupied and built dams in areas predicted to have pervasive dam densities. The resulting spatially explicit, reach-scale (250 m reaches) data identify where dam-building activity is sustainable and at what densities dams can occur across a landscape. As such, model outputs can be used to determine where channel–floodplain and wetland connectivity are likely to persist or expand by promoting increases in beaver dam densities.
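The listing does not include the model code; the following is a minimal, self-contained Python sketch of a Mamdani-style fuzzy inference step combining just two of the seven lines of evidence (vegetation suitability and stream gradient) into a dam-density estimate. The membership breakpoints and output classes are invented for illustration and are not the calibrated values of the published model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def dam_capacity(veg_suitability, stream_gradient):
    """Toy Mamdani-style inference combining two evidence lines into dams/km.

    veg_suitability: hypothetical 0 (unsuitable) .. 4 (preferred forage) score
    stream_gradient: channel slope (m/m); all breakpoints are illustrative
    """
    # Fuzzify the crisp inputs into membership grades in [0, 1]
    veg_good = tri(veg_suitability, 1.0, 4.0, 7.0)
    veg_poor = tri(veg_suitability, -3.0, 0.0, 2.0)
    slope_ok = tri(stream_gradient, -0.01, 0.02, 0.08)
    slope_bad = tri(stream_gradient, 0.05, 0.20, 0.35)

    # Rule strengths (min acts as fuzzy AND); each rule targets an output class
    r_pervasive = min(veg_good, slope_ok)   # -> ~15 dams/km
    r_rare = min(veg_poor, slope_ok)        # -> ~1 dam/km
    r_none = slope_bad                      # -> 0 dams/km

    # Weighted-average defuzzification over representative class values
    weights = np.array([r_pervasive, r_rare, r_none])
    values = np.array([15.0, 1.0, 0.0])
    return float(weights @ values / weights.sum()) if weights.sum() > 0 else 0.0

print(dam_capacity(veg_suitability=3.5, stream_gradient=0.01))  # high capacity
```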
Data Types:
  • Software/Code
  • Image
  • Tabular Data
  • Document
  • File Set
The diagonal method (DM) is an innovative technique for obtaining trustworthy survey data on an arbitrary categorical sensitive characteristic Y∗ (e.g., income classes, number of tax evasions). The estimation of the unconditional distribution of Y∗ from DM data has already been shown. Here, a covariate extension of the DM is sought, that is, methods to investigate the dependence of Y∗ on nonsensitive covariates. For instance, the dependence of income on gender and profession may be under study. The covariate DM broadens the available covariate extensions of privacy-protecting survey designs, especially because existing methods focus on binary Y∗. Two approaches are described: LR-DM estimation, which is based on a logistic regression model, leads to a generalized linear model, and requires computer-intensive methods; and stratum-wise estimation. The existence of a certain regression estimate is investigated. Moreover, the connection between the efficiency of LR-DM estimation and the degree of privacy protection is studied, and appropriate model parameters of the DM are identified. This problem of finding suitable model parameters is rarely addressed for privacy-protecting survey methods with multicategorical Y∗. Finally, LR-DM estimation is compared with stratum-wise estimation. MATLAB programs that conduct the presented estimations are provided as supplemental material.
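The estimation programs themselves are MATLAB supplements to the paper; as a purely illustrative sketch of the general principle behind such privacy-protecting designs (not the DM or LR-DM specifically), a known design matrix P maps the true category distribution to the observed answer distribution, and a moment estimator inverts that mapping. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical design matrix P for an indirect-questioning scheme:
# P[i, j] = probability of recording answer i given true category j.
# In a real survey these probabilities follow from the design parameters.
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

# Observed answer counts from n respondents (invented).
counts = np.array([430, 380, 190])
lam_hat = counts / counts.sum()      # observed answer distribution

# Moment estimator of the true distribution of Y*: solve P @ pi = lam_hat.
pi_hat = np.linalg.solve(P, lam_hat)
print(pi_hat)  # may need truncation to [0, 1] and renormalisation in practice
```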
Data Types:
  • Software/Code
  • Image
  • Tabular Data
HPLC methods that use chromatographic retention times to gain information about compound properties relevant to drug design are reviewed. Properties such as lipophilicity, protein binding, phospholipid binding, and acid/base character can be incorporated into the design of molecules with the right biological distribution and pharmacokinetic profile to become effective drugs. Standardization of the various methodologies is suggested in order to obtain data suitable for inter-laboratory comparison. The published HPLC methods for lipophilicity, acid/base character, and protein and phospholipid binding are critically reviewed and compared with each other using the solvation equation approach. One of the most important points of discussion is how these data can be used in models and how they can influence the drug discovery process. Therefore, the published models for volume of distribution, unbound volume of distribution, and drug efficiency are also discussed. The general relationships between chemical structure and biomimetic HPLC properties are described with a view to ranking and selecting putative drug molecules.
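As a small illustration of the solvation (Abraham) equation approach mentioned above, the system coefficients of a chromatographic phase can be obtained by multiple linear regression of measured retention data against solute descriptors. The descriptor values and retention factors below are invented placeholders, not data from the review.

```python
import numpy as np

# Invented Abraham solute descriptors (E, S, A, B, V) for eight probe
# compounds and their measured log retention factors (log k).
X = np.array([
    [0.80, 0.90, 0.00, 0.50, 1.00],
    [1.20, 1.10, 0.30, 0.80, 1.30],
    [0.60, 0.70, 0.10, 0.40, 0.90],
    [1.50, 1.40, 0.50, 1.00, 1.60],
    [0.30, 0.50, 0.00, 0.30, 0.70],
    [1.00, 1.00, 0.20, 0.60, 1.20],
    [0.90, 1.20, 0.40, 0.70, 1.10],
    [0.40, 0.60, 0.05, 0.35, 0.80],
])
log_k = np.array([0.45, 0.80, 0.30, 1.10, 0.10, 0.65, 0.55, 0.20])

# Solvation equation: log k = c + eE + sS + aA + bB + vV
A = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
coef, *_ = np.linalg.lstsq(A, log_k, rcond=None)
c, e, s, a, b, v = coef
print(dict(c=c, e=e, s=s, a=a, b=b, v=v))     # system coefficients of the phase
```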
Data Types:
  • Other
  • Image
  • Tabular Data
Solid food disintegration within the stomach plays a major role in the rate and final bioavailability of nutrients within the body. Understanding the link between food material properties and their behaviour during gastric digestion is key to the design of novel structures with enhanced functionalities. However, despite extensive research, establishing proper relationships has proved difficult. This work builds on the hypothesis that bridging this knowledge gap requires a better understanding of the underlying mechanisms of food disintegration during digestion. The purpose of this study is to propose a new protocol that, by uncoupling the physicochemical processes occurring during gastric digestion, allows for a more rigorous understanding of these mechanisms. Using steamed potatoes as a model product, this study aims to develop a viable methodology to characterize the role of gastric juice and compressive forces in the breakdown mechanics of solid foods during digestion. From a general viewpoint, this work not only reveals how strongly the parameter used to describe the size distribution of food particles shapes the interpretation of their breakdown behaviour, but also provides a new framework to characterize the mechanisms involved. Results also illustrate that food breakdown during gastric digestion may well not follow a unimodal behaviour, highlighting the need to characterize performance with parameters describing broad aspects of the particle size distribution rather than single-point values. Although arguably simplistic in its approach, this study illustrates how an improved understanding of the role of chemical and physical processes in the breakdown mechanics of solid foods can support valid inferences about their in vivo performance during digestion. In particular, it shows that while the contraction forces occurring in the stomach can easily disintegrate the potato matrix at the molecular level, continuous exposure to gastric juices promotes its disintegration into progressively smaller debris. A discussion of the challenges and future directions for the implementation of a more general and standardized protocol is provided. Not intended to reproduce the breakdown behaviour of foods during gastric digestion, but rather to characterize the mechanisms involved, the proposed protocol would open new opportunities to identify the material properties governing the performance of different foods upon ingestion.
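As a minimal illustration of the point about distribution-wide descriptors versus single-point values, the sketch below computes d10, d50, d90 and the span for a hypothetical particle size sample; the data are simulated and the choice of descriptors is illustrative only.

```python
import numpy as np

# Simulated particle sizes (mm) for one digestion time point.
sizes = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.6, size=500)

# Single-point descriptor (median d50) versus broader descriptors of the
# distribution: d10, d90 and the span capture spread (e.g. from a bimodal mix
# of intact pieces and fine debris) that d50 alone would miss.
d10, d50, d90 = np.percentile(sizes, [10, 50, 90])
span = (d90 - d10) / d50
print(f"d10={d10:.2f} mm  d50={d50:.2f} mm  d90={d90:.2f} mm  span={span:.2f}")
```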
Data Types:
  • Software/Code
  • Image
  • Tabular Data
  • Document
In this paper we search for conditions on age-structured differential games that make their analysis more tractable. We focus on a class of age-structured differential games that share the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are subgame perfect. By means of a simple age-structured advertising problem, we provide an application of the theoretical results presented in the paper and show how to determine an open-loop Nash equilibrium.
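For context, the standard (non-age-structured) argument for why linear-state structure makes open-loop Nash equilibria subgame perfect is sketched below; it is a textbook result and not the paper's age-structured extension.

```latex
% Two-player linear-state game (standard form, no age structure):
\dot{x}(t) = a(t)\,x(t) + b_1(t)\,u_1(t) + b_2(t)\,u_2(t), \qquad x(0) = x_0,
\qquad
J_i = \int_0^T \bigl( c_i(t)\,x(t) + g_i\bigl(u_i(t)\bigr) \bigr)\,dt .
% Player i's Hamiltonian is linear in the state x:
H_i = c_i x + g_i(u_i) + \lambda_i \,( a x + b_1 u_1 + b_2 u_2 ),
% so the costate equation
\dot{\lambda}_i = -\,\partial H_i / \partial x = -\,( c_i + a\,\lambda_i ),
\qquad \lambda_i(T) = 0,
% involves neither x nor the controls, and the first-order condition
% g_i'(u_i) + \lambda_i b_i = 0 yields controls that depend on t only.
% Open-loop strategies independent of the state remain optimal from any
% state reached at any time, hence the equilibrium is subgame perfect.
```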
Data Types:
  • Software/Code
  • Image
  • Document
In the convergence analysis of numerical methods for solving partial differential equations (such as finite element methods), one arrives at certain generalized eigenvalue problems whose maximal eigenvalues need to be estimated as accurately as possible. We apply symbolic computation methods to the situation of square elements and are able to improve the previously known upper bound, given in "p- and hp-finite element methods" (Schwab, 1998), by a factor of 8. More precisely, we first try to evaluate the corresponding determinant using the holonomic ansatz, a powerful tool for dealing with determinants proposed by Zeilberger in 2007. However, it turns out that this method does not succeed on the problem at hand. As a solution, we present a variation of the original holonomic ansatz that is applicable to a larger class of determinants, including the one we are dealing with here. We obtain an explicit closed form for the determinant, whose special structure enables us to derive new and tight upper and lower bounds on the maximal eigenvalue, as well as its asymptotic behaviour.
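For readers unfamiliar with the technique, the original holonomic ansatz (which the paper modifies) can be summarized as follows; this is the standard formulation, not the variation introduced in the paper.

```latex
% Holonomic ansatz (Zeilberger, 2007) for a determinant family
% D_n = \det\bigl(a_{i,j}\bigr)_{1 \le i,j \le n}:
% guess (and later certify) an auxiliary sequence c_{n,j} with c_{n,n} = 1 and
\sum_{j=1}^{n} c_{n,j}\, a_{i,j} = 0 \quad \text{for } 1 \le i \le n-1 .
% Then
\frac{D_n}{D_{n-1}} = \sum_{j=1}^{n} c_{n,j}\, a_{n,j},
% so a conjectured closed form for D_n follows by induction once both
% identities are proved, which is algorithmic when c_{n,j} is holonomic.
```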
Data Types:
  • Software/Code
  • Image
The ROLIS, CIVA-P and OSIRIS instruments on board the Philae lander and the Rosetta orbiter acquired high-resolution images during the lander's descent towards the targeted landing site Agilkia, during its unexpected rebounds, and at the final landing site Abydos on comet 67P/Churyumov–Gerasimenko. We exploited these images, using robotic vision techniques, to locate the first touchdown on the surface of the comet nucleus, to reconstruct the lander's 3D trajectory during the descent and at the beginning of the first rebound, and to create local digital terrain models and depth maps of the Agilkia and Abydos sites. Using the ROLIS close-up images, we could also determine the actual movements of the lander between the beginning and the end of the First Science Sequence, and we propose a new lander bubble-movement command meant to increase the probability of successful drilling during a hypothetical future Long Term Science phase.
Data Types:
  • Other
  • Image
  • Video
  • Tabular Data
A model is presented for the supervised learning problem in which the observations come from a fixed number of pre-specified groups and the regression coefficients may vary sparsely between groups. The model spans the continuum between individual models for each group and one model for all groups. The resulting algorithm is designed with a high-dimensional framework in mind. The approach is applied to a sentiment analysis dataset to show its efficacy and interpretability. One particularly useful application is finding sub-populations in a randomized trial for which an intervention (treatment) is beneficial, often called the uplift problem. Some new concepts useful for uplift analysis are introduced. The value is demonstrated in an application to a real-world credit card promotion dataset. In this example, although sending the promotion has a very small average effect, targeting a particular subgroup with the promotion yields a 15% increase in the proportion of people who purchase the new credit card.
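The abstract describes the model only at a high level. One common way to realize "shared coefficients plus sparse group-specific deviations" is to augment the design matrix with group-interaction columns and apply an L1 penalty (a data-shared-lasso-style construction); the sketch below illustrates that idea on simulated data and is not the authors' algorithm or dataset.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, n_groups = 300, 5, 3

# Simulated data: shared coefficients plus one sparse group-specific deviation.
X = rng.normal(size=(n, p))
g = rng.integers(0, n_groups, size=n)
beta_shared = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
beta_dev = np.zeros((n_groups, p))
beta_dev[2, 0] = 1.5                     # only group 2 deviates, in one feature
y = np.sum(X * (beta_shared + beta_dev[g]), axis=1) + rng.normal(0.0, 0.3, n)

# Augmented design: [shared columns | per-group interaction columns].
Z = np.hstack([X] + [X * (g == k)[:, None] for k in range(n_groups)])

# The L1 penalty shrinks group deviations toward zero, so the fit moves
# between one pooled model and fully separate per-group models.
fit = Lasso(alpha=0.02).fit(Z, y)
shared_hat = fit.coef_[:p]
dev_hat = fit.coef_[p:].reshape(n_groups, p)
print(np.round(shared_hat, 2))
print(np.round(dev_hat, 2))
```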
Data Types:
  • Software/Code
  • Image
  • Tabular Data
  • File Set
Geochemists and soil chemists commonly use parametrized sorption data to assess the transport and impact of pollutants in the environment. However, this evaluation is often hampered by a lack of detailed sorption data analysis, which in turn leads to inaccurate transport modeling. To this end, we present a novel software tool to precisely analyze and interpret sorption isotherm data. Our tool, coded in Visual Basic for Applications (VBA), operates embedded within the Microsoft Excel environment. It consists of a user-defined function named ISOT_Calc, followed by a supplementary optimization Excel macro (Ref_GN_LM). The ISOT_Calc function estimates the solute equilibrium concentration in the aqueous and solid phases (Ce and q, respectively). This provides a very flexible way to optimize the sorption isotherm parameters, as the optimization can be carried out over the residuals of q, Ce, or both simultaneously (i.e., orthogonal distance regression). The function includes the most common sorption isotherm models as predefined equations, as well as the possibility to easily introduce custom-defined ones. The Ref_GN_LM macro performs the parameter optimization using a Levenberg-Marquardt-modified Gauss-Newton iterative procedure. In order to evaluate the performance of the presented tool, both the function and the optimization macro were applied to different sorption data examples described in the literature. Results showed that the optimization of the isotherm parameters was successfully achieved in all cases, indicating the robustness and reliability of the developed tool. Thus, the presented software tool, available to researchers and students for free, has proven to be a user-friendly and interesting alternative to conventional fitting tools used in sorption data analysis.
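The tool itself is the VBA code described above; purely as an illustration of the same fitting idea in another environment, the sketch below fits a Langmuir isotherm to hypothetical batch data with a Levenberg-Marquardt least-squares routine, using residuals over q only rather than the orthogonal-distance option the tool also offers.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical batch sorption data: equilibrium concentration Ce (mg/L)
# and sorbed amount q (mg/g).
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
q = np.array([4.8, 8.5, 13.9, 22.1, 28.0, 32.5, 36.2])

def langmuir(params, ce):
    """Langmuir isotherm: q = q_max * K * Ce / (1 + K * Ce)."""
    q_max, K = params
    return q_max * K * ce / (1.0 + K * ce)

def residuals(params):
    return langmuir(params, Ce) - q      # residuals over q only

# Levenberg-Marquardt ('lm') needs a sensible starting guess.
fit = least_squares(residuals, x0=[30.0, 0.1], method="lm")
q_max_hat, K_hat = fit.x
print(f"q_max = {q_max_hat:.2f} mg/g, K = {K_hat:.3f} L/mg")
```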
Data Types:
  • Other
  • Image
  • Tabular Data
  • Document
A simulation testing framework was developed to evaluate how well two sampling methods used to monitor inshore fish populations, angling and baited remote underwater stereo-video systems (stereo-BRUVs), detect population trends. The study is based on data collected as part of a long-term monitoring program in the Tsitsikamma National Park marine protected area, South Africa. As a test scenario, declining population trajectories of the most abundant species, Chrysoblephus laticeps, were simulated by introducing consecutive years of reduced recruitment over periods of 10 and 20 years using an age-structured operating model. The operating model was designed to generate method-specific relative abundance indices and length–frequency data, using parameters derived from existing data collected in the long-term monitoring program. These were then fitted with an age-structured estimation model. Estimated spawner-biomass depletion was compared to the 'true' simulated population to quantify method-specific accuracy and bias using root-mean-squared error. Owing to the higher data variability and inherent size selectivity of angling, stereo-BRUVs provided more accurate spawner-biomass trends when describing a distinct population decline over 10 and 20 years. Additionally, spawner-biomass was found to be a more accurate population estimate than relative abundance indices because it incorporates information on population size structure. The study demonstrates the potential of using simulation testing to evaluate sampling methods, given that the process generates the 'true' population with known abundance and size structure.
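As a sketch of the evaluation step described above (not the operating or estimation models themselves), the method comparison ultimately reduces to a root-mean-squared error between estimated and simulated 'true' spawner-biomass depletion across replicates; all numbers below are invented.

```python
import numpy as np

def rmse(estimated, true):
    """Root-mean-squared error of estimated vs. simulated 'true' depletion."""
    estimated, true = np.asarray(estimated), np.asarray(true)
    return float(np.sqrt(np.mean((estimated - true) ** 2)))

# Invented final-year spawner-biomass depletion (SB / SB0) for five
# simulation replicates of a 20-year decline, per sampling method.
true_depletion = np.array([0.40, 0.38, 0.42, 0.41, 0.39])
angling_est = np.array([0.55, 0.31, 0.50, 0.29, 0.47])
stereo_bruvs_est = np.array([0.43, 0.36, 0.45, 0.38, 0.42])

print("angling RMSE:     ", round(rmse(angling_est, true_depletion), 3))
print("stereo-BRUVs RMSE:", round(rmse(stereo_bruvs_est, true_depletion), 3))
```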
Data Types:
  • Software/Code
  • Image
  • Tabular Data
  • Document
  • File Set